If the spacex grpc modules are not available in the import path, the scripts will now fall back to using reflection to get them dynamically. I'm not really happy with the mess this made of the import lines, though (and neither is pylint...), so I may hack on that a little further when I get the time.
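A rough sketch of the fallback pattern (not the exact code from these scripts), assuming a grpcio-reflection release that provides ProtoReflectionDescriptorDatabase; the fallback path builds the message classes at runtime from the dish's gRPC reflection service:

```python
import grpc

try:
    # Normal path: the generated spacex protocol modules are importable.
    from spacex.api.device import device_pb2
    from spacex.api.device import device_pb2_grpc
except ImportError:
    # Fallback path: fetch the message definitions over the dish's gRPC
    # reflection service and build the classes dynamically.
    from google.protobuf import descriptor_pool, message_factory
    from grpc_reflection.v1alpha.proto_reflection_descriptor_database import (
        ProtoReflectionDescriptorDatabase)

    channel = grpc.insecure_channel("192.168.100.1:9200")
    pool = descriptor_pool.DescriptorPool(
        ProtoReflectionDescriptorDatabase(channel))
    # Fully qualified names follow the SpaceX.API.Device package naming.
    request_descriptor = pool.FindMessageTypeByName("SpaceX.API.Device.Request")
    Request = message_factory.MessageFactory(pool).GetPrototype(request_descriptor)
```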
Add a requirements.txt file to enable installation of all prerequisites so users don't have to follow the individual instructions for each dependency package. At some point, I'll really need to add proper Python packaging so the whole thing can just be installed via pip.
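A requirements.txt along these lines covers the packages these scripts lean on; the exact list and version pins in the committed file may differ:

```
# gRPC communication with the dish
grpcio
# InfluxDB 1.x client library (influxdb-python)
influxdb
# MQTT client for the MQTT scripts
paho-mqtt
```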
Rearrange the README a bit, since some of the sections have increased or decreased in relevance over time.
I'm sure this isn't a particularly optimal implementation, but it's functional.
This required exporting, from starlink_grpc, knowledge about the types that will be returned per field, and moving things around a little in dish_common.
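The shape of that export looks roughly like this (hypothetical sketch; the real field set in starlink_grpc is larger and the function and field names may differ):

```python
def status_field_names():
    """Names of the status fields, in a fixed order."""
    return ["id", "state", "uptime", "snr", "alert_motors_stuck"]

def status_field_types():
    """Python type of each status field, matching status_field_names()."""
    return [str, str, int, float, bool]
```

With that in place, dish_common can map each value onto the right backend type without hard-coding knowledge of individual fields.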
This adds an example script for the starlink_grpc module. It's a kinda silly thing, but I threw it together to better understand some of the status data, so I figured I'd upload it, since the other example is for direct grpc usage (or for starlink_json if parseJsonHistory can be considered an example).
Rename dishDumpStatus so pylint will stop complaining about the module name. The only script left with my old naming convention now is parseJsonHistory.py.
For now, the default Docker command includes the alert detail but not the obstruction detail, because that's what the old dishStatusInflux.py script had.
Write the sample counter value corresponding to the last recorded data point into the database along with the rest of the sample data, so it can be read back on the next invocation of the script and data collection can resume where it left off.
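Illustratively, resuming works by querying the last recorded counter back out before polling; the measurement and field names here are assumptions, not necessarily what the script writes:

```python
from influxdb import InfluxDBClient

def last_recorded_counter(client: InfluxDBClient):
    """Read back the sample counter of the newest stored data point."""
    result = client.query(
        'SELECT last("counter") FROM "spacex.starlink.user_terminal.history"')
    points = list(result.get_points())
    return points[0]["last"] if points else None
```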
Switch the default sample count to all samples when in bulk mode, which now really means all samples since the last one already recorded.
Switch the time precision to be 1 second. Data points are only written once per second anyway, and this way, if there is any overlap due to counter tracking failure, the existing data will just get overwritten instead of creating duplicates.
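With the influxdb-python client, that's just a matter of passing the precision at write time; the measurement and field names below are illustrative:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", database="starlinkstats")
points = [{
    "measurement": "spacex.starlink.user_terminal.history",
    "time": 1610000000,  # integer epoch seconds, thanks to 1s precision
    "fields": {"counter": 12345, "pop_ping_drop_rate": 0.0},
}]
# At 1-second precision, re-sending a point for an already-recorded
# second overwrites the stored point instead of creating a duplicate.
client.write_points(points, time_precision="s")
```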
Add a maximum queue length, so the script doesn't just keep using more memory if it persistently (>10 days) fails writing to the InfluxDB server.
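The cap works out to roughly ten days of one-point-per-second data; a minimal version of the idea, with hypothetical names:

```python
MAX_QUEUE_LENGTH = 864000  # 10 days at one data point per second

def enqueue(pending_points, new_points):
    """Queue points for (re)sending, discarding the oldest on overflow."""
    pending_points.extend(new_points)
    overflow = len(pending_points) - MAX_QUEUE_LENGTH
    if overflow > 0:
        del pending_points[:overflow]
```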
Hack around some issues I ran into with the influxdb-python client library, especially with respect to running queries against InfluxDB 2.0 servers.
This concludes the functionality related to bulk collection of history data discussed in issue #5.
Handle SIGTERM to enable graceful script shutdown when a container is stopped. This currently only matters for the InfluxDB scripts, and only when they run in a loop, since if the script is hard-terminated, it won't flush out any queued data points to the InfluxDB server. This also required changing the entrypoint script to exec python instead of running it as a child process of the shell running entrypoint.sh, since Docker will only deliver SIGTERM to the parent process it started directly.
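On the Python side, the handler just needs to turn SIGTERM into normal control flow so the flush code runs on the way out; something along these lines (a sketch, not the scripts' exact code):

```python
import signal
import time

class Terminated(Exception):
    """Raised from the signal handler to unwind the polling loop."""

def handle_sigterm(signum, frame):
    # Raising an exception here breaks out of time.sleep() in the
    # interval loop, so queued data points can be flushed before exit.
    raise Terminated

signal.signal(signal.SIGTERM, handle_sigterm)

try:
    while True:
        time.sleep(30)  # stand-in for one polling interval
except Terminated:
    print("SIGTERM received; flushing queued points before exit")
```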
Also, add -t 30 to the default Docker command to match the script's default behavior prior to the changes in 46f65a6214.
Add an interval timing loop for all the grpc scripts that did not already have one. Organizing some of the code into functions to facilitate this caused some problems with local variables vs global ones, so I moved the script code into a proper main() function. That didn't really solve the access-to-globals issue, though, so I also moved the mutable state into a class instance.
The interval timer should be relatively robust against time drift due to the loop function's running time and/or OS scheduler delay, but it is far from perfect.
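The drift resistance comes from scheduling each iteration against an absolute clock rather than sleeping a fixed interval after the work finishes; roughly (a sketch, not the shared code itself):

```python
import time

def run_loop(interval, loop_body):
    """Run loop_body every `interval` seconds, resisting drift."""
    next_time = time.monotonic()
    while True:
        loop_body()
        next_time += interval
        delay = next_time - time.monotonic()
        if delay > 0:
            # Sleep to the absolute deadline, so loop_body's run time
            # and scheduler wakeup delay don't accumulate over time.
            time.sleep(delay)
        else:
            # Fell behind by a full interval or more; re-anchor rather
            # than firing a burst of catch-up iterations.
            next_time = time.monotonic()
```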
Retry logic is now in place for both InfluxDB scripts. Retry for dishStatusInflux.py is slightly changed, in that a failed write to the InfluxDB server will be retried on every interval, rather than waiting for another batch full of data points to write, but this only happens once there is at least a full batch (currently 6) of data points pending. This new behavior matches how the autocommit functionality on SeriesHelper works.
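In other words, writes are batched, and the pending batch is retried each interval once it is full; schematically (hypothetical names, batch size from the text above):

```python
BATCH_SIZE = 6  # matches the SeriesHelper autocommit bulk size

def flush_pending(client, pending_points):
    """Try writing queued points; on failure, keep them for next time."""
    if len(pending_points) < BATCH_SIZE:
        return  # wait until at least a full batch is pending
    try:
        client.write_points(pending_points, time_precision="s")
        pending_points.clear()
    except Exception as ex:
        print("Failed writing to InfluxDB server: " + str(ex))
```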
Changed the default behavior of dishStatusInflux.py to not loop, in order to match the other scripts. To get the old behavior, add a '-t 30' option to the command line.
Closes #9
SSL/TLS support for InfluxDB and MQTT scripts
Copy the command line option handling into the status scripts to facilitate this. Also copy the reading of settings from environment variables from dishStatusInflux_cron.py.
Better error handling for failures while writing to the data backend. Error printing verbosity is now a bit inconsistent, but I'll address that separately.
Still to be done is dishStatusInflux_cron.py, pending a decision on what to do with that script, given that dishStatusInflux.py can now be run in one-shot mode.
This is related to issue #2.
Unlike the status info scripts, these include support for setting host and database parameters via command line options. Still to be added is support for HTTPS/SSL.
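The option handling is plain argparse; the flag names here are illustrative rather than the exact ones the scripts use:

```python
import argparse

parser = argparse.ArgumentParser(
    description="Collect bulk dish history data into InfluxDB")
parser.add_argument("-n", "--hostname", default="localhost",
                    help="Hostname of the InfluxDB server")
parser.add_argument("-D", "--database", default="starlinkstats",
                    help="Database name to write data points into")
opts = parser.parse_args()
```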
Add a get_id function to the grpc parser module, so it can be used for tagging purposes.
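For example, a collector can fetch the ID once at startup and attach it as a tag on every data point; the tag layout here is illustrative:

```python
import starlink_grpc

# The dish's unique ID distinguishes data from multiple dishes that
# share one database or MQTT broker.
dish_id = starlink_grpc.get_id()
tags = {"id": dish_id}
```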
Minor cleanups in some of the other scripts to make them consistent with the newly added scripts.