How-To News

Using Python to manage your miners; Part 2

Springtime is finally here and summer is around the corner.  Let’s dive into part two of our Python management script and start collecting data from our machines before it gets too hot outside to run them.  If you want to catch up, check out the first part of our tutorial before you join us back here.

Last week we built our first snippet of code to communicate with the cgminer API running on one of our Avalon lab machines.  To recap, we used a few Python packages to open a socket connection to the device, sent a command for summary information, and finally received the data back.

This week we’ll build on that foundation by adding a lightweight database that periodically stores that information so we can retrieve the data programmatically later.  We’ll use this data in our next installment to build some pretty graphs and create a web-based monitoring page.  Let’s get to it!

To lite or not to lite

In planning this installment of our guide, I had difficulty picking a database format.  The most common database used for web development is MySQL, and I would normally jump right into it.  In this case, however, I want to get us up and running quickly, so we’ll be using MySQL’s lightweight cousin, SQLite.  We’ll wrap the database access in a function, and I’ll publish a drop-in code swap for MySQL later this week.

SQLite stores the data in a local file, which makes it easy to copy that data to other devices, and it supports nearly all of the basic commands we’ll need.  It also needs only a single Python package to get started, and we’ll save time by adding a database creation function in case we decide to blow away the data and start from scratch.  SQLite makes all of that much easier to manage than a full-blown MySQL install.
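To see just how little setup that takes, here’s a minimal standalone sketch.  The file name `demo.sqlite` and the `demo` table are just examples for illustration, not part of our script:

```python
import sqlite3

# connect to a local file; sqlite3 creates the file automatically if it doesn't exist
conn = sqlite3.connect('demo.sqlite')
c = conn.cursor()
c.execute('DROP TABLE IF EXISTS demo')  # start fresh so the example is repeatable
c.execute('CREATE TABLE demo (time integer, value float)')
c.execute('INSERT INTO demo VALUES (?, ?)', (0, 1.5))
conn.commit()
rows = c.execute('SELECT * FROM demo').fetchall()
print(rows)  # [(0, 1.5)]
conn.close()
```

Everything, create, insert, and query, happens against one ordinary file on disk, which is exactly why SQLite is so convenient for a quick project like ours.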

Let’s take a look at last week’s code before we start our upgrade:

#!/usr/bin/env python3

# A simple script for managing cgminer devices

import socket  # import the built-in package to open a TCP/IP socket to connect to miners
import json  # use the JSON package to format messages sent to the miner


def check_status(hostaddress):  # take the passed host IP address and send it a status command

    try:  # use the try command to help catch errors
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # use the socket package to create a new connection
        sock.settimeout(5)  # set the socket timeout to five seconds
        sock.connect((hostaddress, 4028))  # connect to the provided host

        message = json.dumps({'command': 'summary'})  # create the JSON formatted 'summary' command
        sock.sendall(message.encode())  # send the command to the miner over the connected socket
        response = sock.recv(4096).decode("ascii").rstrip(' \t\r\n\0')  # receive and clean up the response

        sock.shutdown(socket.SHUT_RDWR)  # send the shutdown to the socket so it closes the connection cleanly
        sock.close()  # close our socket

        response_decoded = json.loads(response)  # decode the JSON response so we can access the individual data later
        print(response_decoded)  # print the entire response received

    except Exception as e:
        print("Error: " + str(e))  # if there is an error, print it


def main():
    check_status("10.3.1.10")


main()

I’m going to be showing the lines we add individually so this post doesn’t scroll for miles.  We’ll show the complete code before we wrap it up each week, and you should always feel free to ask for clarification in the comments.

Let’s add the SQLite package to access the database and the os.path module so we can check for an existing database file.  We’ll also create the create_database() function with a ‘pass’ placeholder for now:

#!/usr/bin/env python3

# A simple script for managing cgminer devices

import socket  # import the built-in package to open a TCP/IP socket to connect to miners
import json  # use the JSON package to format messages sent to the miner
import sqlite3  # PART TWO: import the SQLite package for database access and manipulation
import os.path  # PART TWO: used to determine if a file exists.  In our case, the SQLite database
import time  # this package will be used for timestamp functions

sqlite_file = 'engine.sqlite'  # PART TWO: name of the sqlite database file


def create_database():  # PART TWO: create a SQLite database
    pass

We also added a variable on line 11 above to point to where we expect the SQLite file to be.  Now let’s add a few lines to the main() function to check whether the database exists and call create_database() if it doesn’t:

    if not os.path.isfile(sqlite_file):  # if the database doesn't exist, create it
        create_database()

What to monitor

Before we build the tables for our new database, we need to have a brief conversation about what data we actually want to save. As a refresher, here’s the output we received from cgminer:

{'id': 1, 'SUMMARY': [{'Local Work': 823771, 'Discarded': 65690, 'Device Rejected%': 1.1347, 'MHS 5s': 9477273.36, 'Network Blocks': 191, 'Get Failures': 3, 'Pool Rejected%': 1.1374, 'Work Utility': 96901.61, 'MHS av': 6857775.14, 'MHS 1m': 7778772.41, 'Pool Stale%': 0.0, 'Remote Failures': 0, 'Found Blocks': 12, 'Rejected': 171, 'Total MH': 798932420667.0, 'Stale': 0, 'Difficulty Accepted': 185575360.0, 'MHS 15m': 7026118.23, 'Difficulty Stale': 0.0, 'Hardware Errors': 2174, 'MHS 5m': 7193833.22, 'Getworks': 3903, 'Device Hardware%': 0.0012, 'Best Share': 105136006, 'Last getwork': 1523980307, 'Difficulty Rejected': 2135009.0, 'Utility': 8.37, 'Elapsed': 116500, 'Accepted': 16261}], 'STATUS': [{'When': 1523980308, 'Msg': 'Summary', 'STATUS': 'S', 'Description': 'cgminer 4.10.0', 'Code': 11}]}

This is a ton of data, and we don’t need to watch it all.  For our example we’ll store the ‘Accepted’, ‘Pool Rejected%’, ‘MHS 5m’ and ‘Last getwork’ values.  These metrics will tell us how many shares we’ve had accepted, what percentage of shares were rejected, the five-minute average speed, and when the last share was accepted.  Cgminer provides more metrics via additional API commands; we’ll delve into those in more detail in the next few posts.

So four data points need to go into our database, plus a timestamp so we can track when we collected the data.  Here’s what the code to create this database looks like:

def create_database():  # PART TWO: create a SQLite database

    conn = sqlite3.connect(sqlite_file)  # create a new SQLite connection
    c = conn.cursor()  # create a new database object

    c.execute('CREATE TABLE monitor_history (time timestamp, accepted integer, rejected float, hashrate float, lastwork integer)')

    conn.commit()  # committing changes and closing the connection to the database file
    conn.close()

Reviewing the ins and outs of database design is beyond the scope of this article, but line six in the example above lays out the format of our data table.  To learn more about SQLite, feel free to check out this tutorial.
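If you want to sanity-check the result from Python, you can ask SQLite’s built-in sqlite_master table which tables exist.  This sketch uses IF NOT EXISTS so it runs standalone, whether or not create_database() has already been run:

```python
import sqlite3

sqlite_file = 'engine.sqlite'  # same file name our script uses

conn = sqlite3.connect(sqlite_file)
c = conn.cursor()
# IF NOT EXISTS makes this safe to run repeatedly without an error
c.execute('CREATE TABLE IF NOT EXISTS monitor_history '
          '(time timestamp, accepted integer, rejected float, hashrate float, lastwork integer)')
# sqlite_master is SQLite's own catalog of schema objects
tables = [row[0] for row in c.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # should include 'monitor_history'
conn.close()
```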

Storing the data

Now that we have a table structure and have created the corresponding database, it’s time to start storing our metrics. Let’s create a function that opens the newly created database and stores the data:

def store_status(accepted, rejected, hashrate, lastwork):

    timestamp = int(time.time())  # the current time in UNIX epoch, drop the decimal places
    conn = sqlite3.connect(sqlite_file)  # create a new SQLite connection
    c = conn.cursor()  # create a new database object
    c.execute("INSERT INTO monitor_history (time, accepted, rejected, hashrate, lastwork) VALUES (?, ?, ?, ?, ?)",
              (timestamp, accepted, rejected, hashrate, lastwork))
    conn.commit()  # committing changes and closing the connection to the database file
    conn.close()

This function is what we’ll call when we want to insert a new record into our database.  We’ll pass it the accepted shares, rejected percentage, five-minute hashrate, and the time of the last work.  All of these variables are defined in line one above, in the parentheses after ‘def store_status’, and will be passed from our original check_status() function.  Let’s update that function to replace printing the data with storing the same information in our database:

def check_status(hostaddress):  # take the passed host IP address and send it a status command

    try:  # use the try command to help catch errors
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # use the socket package to create a new connection
        sock.settimeout(5)  # set the socket timeout to five seconds
        sock.connect((hostaddress, 4028))  # connect to the provided host

        message = json.dumps({'command': 'summary'})  # create the JSON formatted 'summary' command
        sock.sendall(message.encode())  # send the command to the miner over the connected socket
        response = sock.recv(4096).decode("ascii").rstrip(' \t\r\n\0')  # receive and clean up the response

        sock.shutdown(socket.SHUT_RDWR)  # send the shutdown to the socket so it closes the connection cleanly
        sock.close()  # close our socket

        response_decoded = json.loads(response)  # decode the JSON response so we can access the individual data later
        accepted = response_decoded["SUMMARY"][0]["Accepted"]
        rejected = response_decoded["SUMMARY"][0]["Pool Rejected%"]
        hashrate = response_decoded["SUMMARY"][0]["MHS 5m"]
        lastwork = response_decoded["SUMMARY"][0]["Last getwork"]

        store_status(accepted, rejected, hashrate, lastwork)  # send gathered data to the store_status function

At this point we should have a working script that takes responses from cgminer and inserts them into a SQLite database.  You’ll want to snag a free copy of DB Browser to take a look at your database.  Run the script a few times and it should look something like this:
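If you’d rather peek at the rows from Python instead of DB Browser, here’s a small sketch.  So there is something to show even on a fresh database, it inserts one sample row using values taken from the summary output earlier in this post:

```python
import sqlite3
import time

conn = sqlite3.connect('engine.sqlite')
c = conn.cursor()
# IF NOT EXISTS lets this snippet run even before create_database() has been called
c.execute('CREATE TABLE IF NOT EXISTS monitor_history '
          '(time timestamp, accepted integer, rejected float, hashrate float, lastwork integer)')
# sample row: accepted shares, rejected %, MHS 5m, and Last getwork from the output above
c.execute('INSERT INTO monitor_history VALUES (?, ?, ?, ?, ?)',
          (int(time.time()), 16261, 1.1374, 7193833.22, 1523980307))
conn.commit()
# show the five most recent samples
rows = c.execute('SELECT * FROM monitor_history ORDER BY time DESC LIMIT 5').fetchall()
for row in rows:
    print(row)
conn.close()
```

Once the real script has been running for a while, the same SELECT is how you’d pull history back out for graphing.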

Before we wrap up this week, let’s add one extra line in our script so that it outputs a summary of the written data to the console:

def store_status(accepted, rejected, hashrate, lastwork):

    timestamp = int(time.time())  # the current time in UNIX epoch, drop the decimal places
    conn = sqlite3.connect(sqlite_file)  # create a new SQLite connection
    c = conn.cursor()  # create a new database object
    c.execute("INSERT INTO monitor_history (time, accepted, rejected, hashrate, lastwork) VALUES (?, ?, ?, ?, ?)",
              (timestamp, accepted, rejected, hashrate, lastwork))
    conn.commit()  # committing changes and closing the connection to the database file
    conn.close()

    print(str(timestamp) + ": Database entry added; " + str(accepted) + " accepted shares")

This line will add a brief description of the stored data when the script completes. If you are using PyCharm to follow along, you should see something similar to this:

Here’s the entire updated script from today’s post:

#!/usr/bin/env python3

# A simple script for managing cgminer devices

import socket  # import the built-in package to open a TCP/IP socket to connect to miners
import json  # use the JSON package to format messages sent to the miner
import sqlite3  # PART TWO: import the SQLite package for database access and manipulation
import os.path  # PART TWO: used to determine if a file exists.  In our case, the SQLite database
import time  # this package will be used for timestamp functions

sqlite_file = 'engine.sqlite'  # PART TWO: name of the sqlite database file


def create_database():  # PART TWO: create a SQLite database

    conn = sqlite3.connect(sqlite_file)  # create a new SQLite connection
    c = conn.cursor()  # create a new database object

    c.execute('CREATE TABLE monitor_history (time timestamp, accepted integer, rejected float, hashrate float, lastwork integer)')

    conn.commit()  # committing changes and closing the connection to the database file
    conn.close()


def store_status(accepted, rejected, hashrate, lastwork):

    timestamp = int(time.time())  # the current time in UNIX epoch, drop the decimal places
    conn = sqlite3.connect(sqlite_file)  # create a new SQLite connection
    c = conn.cursor()  # create a new database object
    c.execute("INSERT INTO monitor_history (time, accepted, rejected, hashrate, lastwork) VALUES (?, ?, ?, ?, ?)",
              (timestamp, accepted, rejected, hashrate, lastwork))
    conn.commit()  # committing changes and closing the connection to the database file
    conn.close()

    print(str(timestamp) + ": Database entry added; " + str(accepted) + " accepted shares")


def check_status(hostaddress):  # take the passed host IP address and send it a status command

    try:  # use the try command to help catch errors
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # use the socket package to create a new connection
        sock.settimeout(5)  # set the socket timeout to five seconds
        sock.connect((hostaddress, 4028))  # connect to the provided host

        message = json.dumps({'command': 'summary'})  # create the JSON formatted 'summary' command
        sock.sendall(message.encode())  # send the command to the miner over the connected socket
        response = sock.recv(4096).decode("ascii").rstrip(' \t\r\n\0')  # receive and clean up the response

        sock.shutdown(socket.SHUT_RDWR)  # send the shutdown to the socket so it closes the connection cleanly
        sock.close()  # close our socket

        response_decoded = json.loads(response)  # decode the JSON response so we can access the individual data later
        accepted = response_decoded["SUMMARY"][0]["Accepted"]
        rejected = response_decoded["SUMMARY"][0]["Pool Rejected%"]
        hashrate = response_decoded["SUMMARY"][0]["MHS 5m"]
        lastwork = response_decoded["SUMMARY"][0]["Last getwork"]

        store_status(accepted, rejected, hashrate, lastwork)  # send gathered data to the store_status function


    except Exception as e:
        print("Error: " + str(e))  # if there is an error, print it


def main():

    if not os.path.isfile(sqlite_file):  # if the database doesn't exist, create it
        create_database()
    check_status("10.10.2.192")


main()
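As written, the script stores one sample each time you run it.  To collect data periodically, you could schedule the script with cron, or wrap check_status() in a simple polling loop.  Here’s a hedged sketch of the latter; poll() and its parameters are my own additions, not part of the script above, and the demonstration uses a stand-in function instead of a live miner:

```python
import time


def poll(check, hosts, interval, rounds):
    """Call check(host) for each host, `rounds` times, sleeping `interval` seconds between rounds."""
    for i in range(rounds):
        for host in hosts:
            check(host)
        if i < rounds - 1:  # no need to sleep after the final round
            time.sleep(interval)


# quick demonstration with a stand-in for check_status
calls = []
poll(calls.append, ['10.10.2.192'], 0, 3)
print(calls)  # the stand-in was called once per round
```

With the real script you’d call something like poll(check_status, ['10.10.2.192'], 60, rounds) to take a sample every minute, and the hosts list makes it easy to add more miners later.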

Next up

So far we’ve built a script that pulls data from our miner and stores it.  It isn’t particularly robust, and we’ll need to add more error trapping as we continue.  In my next installment we’ll pull information from our database and create a web-accessible page we can use to monitor our miner in (almost) real time.  As always, let me know if you have any questions in the comments, or tweet your thoughts to @HashesPerSecond.  Happy mining!
