Commit graph

47 commits

sparky8512
8cbab00517
Note the Docker image as multi-arch 2022-12-21 10:41:17 -08:00
sparky8512
dc4ff85dbe A few tweaks to the Prometheus exporter script
Move the global state onto the http server object so it doesn't have to be accessed as module globals.

Limit the mode groups that can be selected via command line args to the ones that are actually parsed. There are a few other options added in dish_common that don't really apply to this script, but they are mostly harmless, whereas some of the other mode groups will cause this script to throw an exception.

Reject access to "/favicon.ico" path, so testing from a web browser does not result in running the dish queries twice, and thus confusing the global state a little.

Add a lock to serialize calls to dish_common.get_data. That function is not thread-safe, even with CPython's Global Interpreter Lock, because the starlink_grpc functions it calls block. This script is really not meant for concurrent HTTP access, given that the usage stats are reported as usage since last access (by default), but since it's technically supported, might as well have it work properly.

Add the same handling of keyboard interrupt (Ctrl-C) and SIGTERM signal as the other grpc scripts, along with proper shutdown.
2022-12-21 10:25:57 -08:00
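
A minimal sketch of the serialization and /favicon.ico handling described in the commit above. This is not the actual exporter code: ExporterServer, ExporterHandler, collect_metrics, and the port number are hypothetical stand-ins, and the real script calls dish_common.get_data under the lock instead of the placeholder shown here.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class ExporterServer(HTTPServer):
    """HTTP server that owns the shared state instead of module globals."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # dish_common.get_data is not thread-safe, so serialize access to it
        self.data_lock = threading.Lock()


class ExporterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/favicon.ico":
            # avoid running the dish queries twice when testing from a browser
            self.send_error(404)
            return
        with self.server.data_lock:
            body = collect_metrics()  # placeholder for the real data collection
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())


def collect_metrics():
    # stand-in for the dish_common.get_data call made by the real script
    return "starlink_status_up 1\n"


if __name__ == "__main__":
    ExporterServer(("", 9099), ExporterHandler).serve_forever()
```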
luxifer
40cd0127dc add prometheus exporter 2022-11-15 16:45:24 +00:00
sparky8512
36f7b169a0
Correct another issue in the systemd instructions
As noted in issue #67.
2022-11-10 12:58:06 -08:00
jbuck2005
125e918aba
fix mkdir starlink-grpc-tool (missing s)
Fix the missing "s" in

mkdir starlink-grpc-tool
and
cd starlink-grpc-tool

The install location pointed to by https://github.com/sparky8512/starlink-grpc-tools/blob/main/systemd/starlink-influx2.service shows WorkingDirectory=/opt/starlink-grpc-tools/
2022-11-10 12:56:10 -05:00
sparky8512
29699f0f59 Add data group for GPS location data
Add a new mode group, location, for physical location data. This requires access to location data to be enabled on the dish in order to return data; see the README file for more details. At least for now, this functionality should be considered experimental.

This is being lumped into the "status" category for the purposes of database schema, but it's a separate grpc request from the other status data, and can fail at times when the other status request does not, so it has a separate function in the starlink_grpc module.
2022-09-10 14:43:50 -07:00
sparky8512
87e1f24e77
Clarify dump_dish_status.py requirements
Specifically, that the recommendation of skipping the generation of protocol modules does not apply, as discussed in issue #47.
2022-06-13 20:54:30 -07:00
sparky8512
30d20a4f5b
Note Python 3.7 as minimum version
While the scripts can mostly work with Python 3.5, the installation of pre-reqs gets really messy, per issue #45. Python 3.5 has been end-of-lifed for a while now, anyway.
2022-05-31 14:44:25 -07:00
sparky8512
e01aea6bd4 Add simple script to reboot, stow, or unstow dish
As reported in issue #43, the unstow operation is not quite as straightforward to perform using grpcurl as reboot or stow.

This also serves as a simplified example of using reflection (via yagrc) to perform grpc operations on the dish.
2022-04-19 11:17:14 -07:00
Dean Cording
b3f60b0858
Update README.md
Fixed typo
2022-03-02 16:23:17 +10:00
sparky8512
7dbb47ab40 New utility script to record dish protocol data
Not really Starlink specific, and pretty much redundant with similar functionality in grpcurl, but more appropriate for running periodically: it uses a filename that should be specific to the protocol data content, so old data will not be overwritten when new data is available.
2022-01-20 14:04:46 -08:00
sparky8512
0ace643acc A few more updates related to the new script
Add the Python packages required for the InfluxDB 2.x client to the Dockerfile, at the specific versions that I tested with. Most of the package dependencies overlap with the InfluxDB 1.x client package, so there was not much added.

Note the new script name in the relevant parts of the README.
2021-11-26 15:34:51 -08:00
Peter Hasse
d8cf8c6a7f updated readme for InfluxDB 2.x 2021-11-26 11:02:16 +01:00
sparky8512
859dc84b88 Documentation updates
A bunch of content from the README and get_history_notes.txt has been moved to the Wiki, as it is not critical to understanding how to install or use the scripts.

Move the checked in Grafana dashboard into a subdirectory in a feeble attempt to encourage other people to submit more of them.

Change the officially supported Docker image to the one published to GitHub Package repository by this project's workflow task.
2021-11-08 20:10:39 -08:00
sparky8512
1a9af6ad5d Interval loop support for obstruction maps
Tracked on issue #27
2021-09-08 13:45:36 -07:00
sparky8512
af940a9727 Improvements to how the -o option works
Change the loop polling function (-o) to aggregate the history data each polling loop instead of just keeping the last polled history so it can be logged when reboot is detected. This allows for computing statistics across a longer period than the size of the dish's history buffer, which has been reduced to 15 minutes recently.

This change also makes it so data is not logged right away when dish reboot is detected, so the logging always happens at the specified interval whether there was a reboot or not.

Finally, change the poll loop counting so data is not emitted on the first loop when polling is configured. That made sense to do when the history buffer was large enough to have the entire period's worth of data, but now it just results in a short period in the log output every time the script is restarted.

Fixes #29
2021-09-07 12:02:14 -07:00
sparky8512
0f540a4b96
Add note about reduction of history buffer data size 2021-08-19 13:38:36 -07:00
sparky8512
be776cde1c Resume from last counter for history stats
Currently only implemented for sqlite, since with InfluxDB, there is the complication that the InfluxDB server may not be available to query at script start time. Also, it only applies when polling all samples, which is not the default, and even then can be disabled with either --skip-query or --no-counter options.

Remove the crontab instructions from the README, since the periodic loop functionality is probably now a better approach for periodic recording of stats data.
2021-02-27 15:57:35 -08:00
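
A minimal sketch of the resume-from-counter idea for the sqlite case, assuming a hypothetical history table with a counter column; the real schema and column names used by the sqlite script may differ.

```python
import sqlite3


def last_recorded_counter(db_path):
    """Return the highest sample counter already stored, or None if empty."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute("SELECT MAX(counter) FROM history").fetchone()
        return row[0] if row else None
    finally:
        conn.close()


# On the next invocation, samples with counter <= last_recorded_counter(path)
# have already been written and can be skipped.
```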
sparky8512
206bbbf919
Switch two more references to new script name 2021-02-21 14:11:19 -08:00
sparky8512
258a33d62d Port grpc history features to JSON parser script
This brings most of the history-related functionality implemented in the grpc scripts to the JSON version, but only for text output. It also renames parserJsonHistory.py to dish_json_text.py, which removes the last remaining complaint from pylint about module name not conforming to style conventions.

A lot of this is just duplicated code from dish_common and dish_grpc_text, just simplified a little where some of the flexibility wasn't needed.

This removes compatibility with Python 2.7, because I didn't feel like reimplementing statistics.pstdev and didn't think such compatibility was particularly important.
2021-02-21 13:49:45 -08:00
sparky8512
ec61333710 Two more places that need the grpc imports 2021-02-12 13:48:55 -08:00
sparky8512
491227ddb4 Remove need to generate grpc modules via protoc
If the spacex grpc modules are not available in the import path, the scripts will now fall back to using reflection to get them dynamically. I'm not really happy with the mess this made of the import lines, though (and neither is pylint...), so I may hack on that a little further when I get the time.

Add a requirements.txt file to enable installation of all prerequisites so users don't have to follow the individual instructions for each dependency package. At some point, I'll really need to add proper Python packaging so the whole thing can just be installed via pip.

Rearrange the README a bit, since some of the sections have increased or decreased in relevance over time.
2021-02-12 13:11:54 -08:00
sparky8512
1ed0adc55a
Add new proto file to protoc instructions 2021-02-09 21:42:14 -08:00
sparky8512
a4adf3c383 Clarify a technical detail
... that I'm sure nobody cares about.
2021-02-04 20:05:04 -08:00
sparky8512
549a46ae56 Add new dish_grpc script for sqlite output
I'm sure this isn't a particularly optimal implementation, but it's functional.

This required exporting knowledge about the types that will be returned per field from starlink_grpc and moving things around a little in dish_common.
2021-02-03 17:23:01 -08:00
sparky8512
f5f1bbdb84 Clean up the simple example script and add new one
This adds an example script for the starlink_grpc module. It's a kinda silly thing, but I threw it together to better understand some of the status data, so I figured I'd upload it, since the other example is for direct grpc usage (or for starlink_json if parseJsonHistory can be considered an example).

Rename dishDumpStatus so pylint will stop complaining about the module name. The only script left with my old naming convention now is parseJsonHistory.py.
2021-02-01 20:47:18 -08:00
sparky8512
076e80c84a
Point out how to find the group/field descriptions 2021-01-30 13:29:17 -08:00
sparky8512
5c6a191660 Updates for new script naming and CLI
For now, the default docker command includes the alert detail but not the obstruction detail, because that's what the old dishStatusInflux.py script had.
2021-01-30 13:17:42 -08:00
sparky8512
a692f8930a
Add another potential TODO 2021-01-27 16:26:23 -08:00
sparky8512
2e045ade16 Add tracking of counter across script invocations
Write the sample counter value corresponding with the last recorded data point into the database along with the rest of the sample data so that it can be read out on next invocation of the script and data collection resumed where it left off.

Switch default sample count to all samples when in bulk mode, which now really means all samples since the last one recorded already.

Switch the time precision to be 1 second. Data points are only written one per second, anyway, and this way if there is any overlap due to counter tracking failure, the existing data will just get overwritten instead of creating duplicates.

Add a maximum queue length, so the script doesn't just keep using more memory if it persistently (>10 days) fails writing to the InfluxDB server.

Hack around some issues I ran into with the influxdb-python client library, especially with respect to running queries against InfluxDB 2.0 servers.

This concludes the functionality related to bulk collection of history data discussed on issue #5
2021-01-21 20:39:37 -08:00
sparky8512
9a57c93d73 Back out changing the default command options
Per review feedback, this could have interfered with the ability to set this option via environment variable. It was a bit messy, anyway.
2021-01-16 20:12:22 -08:00
sparky8512
9b04e8387c Minor changes based on thorough proof read 2021-01-16 10:33:22 -08:00
sparky8512
2e71acbbdb Changes to work better with Docker containers
Handle SIGTERM to enable graceful script shutdown when a container is stopped. This currently only matters for the InfluxDB scripts, and only when they run in a loop, since if the script is hard-terminated, it won't flush out any queued data points to the InfluxDB server. This also required changing the entrypoint script to exec python instead of running it as a child process of the shell running entrypoint.sh, since Docker will only deliver SIGTERM to the parent process it started directly.

Also, add -t 30 to the default Docker command to match the script default behavior prior to the changes in 46f65a6214
2021-01-16 10:17:32 -08:00
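
The graceful-shutdown pattern described above boils down to registering a handler for SIGTERM (and Ctrl-C) that flushes queued data before exiting. This is only an illustrative sketch, not the scripts' actual shutdown code; handle_termination is a hypothetical name.

```python
import signal
import sys


def handle_termination(signum, frame):
    # stand-in for flushing any queued InfluxDB data points before exit
    print("flushing pending data points, then exiting")
    sys.exit(0)


signal.signal(signal.SIGTERM, handle_termination)
signal.signal(signal.SIGINT, handle_termination)
```

For this to matter inside a container, the entrypoint has to exec python so the Python process replaces the shell and actually receives the SIGTERM that Docker sends on stop, as the commit message notes.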
sparky8512
46f65a6214 Implement periodic loop option
Add an interval timing loop for all the grpc scripts that did not already have one. Organized some of the code into functions in order to facilitate this, which caused some problems with local variables vs global ones, so moved the script code into a proper main() function. That didn't really solve the access to globals issue, so also moved the mutable state into a class instance.

The interval timer should be relatively robust against time drift due to the loop function running time and/or OS scheduler delay, but is far from perfect.

Retry logic is now in place for both InfluxDB scripts. Retry for dishStatusInflux.py is slightly changed in that a failed write to InfluxDB server will be retried on every interval, rather than waiting for another batch full of data points to write, but this only happens once there is at least a full batch (currently 6) of data points pending. This new behavior matches how the autocommit functionality on SeriesHelper works.

Changed the default behavior of dishStatusInflux.py to not loop, in order to match the other scripts. To get the old behavior, add a '-t 30' option to the command line.

Closes #9
2021-01-15 18:39:33 -08:00
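
The drift-resistant interval timer described above can be reduced to a sketch: schedule the next run off a monotonic clock so loop-body runtime and scheduler delay don't accumulate. run_interval_loop and the 30-second example echo the '-t 30' default mentioned in the commit, but this is not the actual implementation.

```python
import time


def run_interval_loop(interval_sec, loop_body):
    """Run loop_body roughly every interval_sec seconds, compensating for the
    time the body itself takes (illustrative only)."""
    next_run = time.monotonic()
    while True:
        loop_body()
        next_run += interval_sec
        now = time.monotonic()
        # if the body overran one or more intervals, skip ahead instead of
        # firing a burst of catch-up iterations
        while next_run <= now:
            next_run += interval_sec
        time.sleep(next_run - now)


if __name__ == "__main__":
    run_interval_loop(30, lambda: print("poll the dish here"))
```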
Leigh Phillips
0ed3e074db
Update README.md 2021-01-11 17:24:10 -08:00
Leigh Phillips
eecc65a5ba
Update README.md 2021-01-11 00:01:50 -08:00
Leigh Phillips
d21b196f78
Update README.md 2021-01-11 00:00:49 -08:00
Leigh Phillips
ff2d0eacb1
Merge pull request #2 from sparky8512/main
SSL/TLS support for InfluxDB and MQTT scripts
2021-01-10 21:53:32 -08:00
sparky8512
ce44f3c021 SSL/TLS support for InfluxDB and MQTT scripts

Copy the command line option handling into the status scripts to facilitate this. Also copy the setting of options from environment variables from dishStatusInflux_cron.py.

Better error handling for failures while writing to the data backend. Error printing verbosity is now a bit inconsistent, but I'll address that separately.

Still to be done is dishStatusInflux_cron.py, pending a decision on what to do with that script, given that dishStatusInflux.py can now be run in one-shot mode.

This is related to issue #2.
2021-01-10 21:36:44 -08:00
Leigh Phillips
27ce46cb3c
Update README.md
Name & run docker in daemon mode.
2021-01-09 13:22:55 -08:00
sparky8512
f067f08952 Add InfluxDB and MQTT history stats scripts
Unlike the status info scripts, these include support for setting host and database parameters via command line options. Still to be added is support for HTTPS/SSL.

Add a get_id function to the grpc parser module, so it can be used for tagging purposes.

Minor cleanups in some of the other scripts to make them consistent with the newly added scripts.
2021-01-09 12:03:37 -08:00
sparky8512
253d6e9250
Fix spelling error 2021-01-09 11:43:25 -08:00
Leigh Phillips
c28e025893
Update README.md 2021-01-08 22:41:57 -08:00
sparky8512
f206a3ad91 Add a short blurb for the recently added scripts 2020-12-30 11:47:03 -08:00
sparky8512
61e9abf7d4 Add header comments and update README
Also, add a timestamp to the CSV output, similar to what is in the get_history equivalent.
2020-12-29 20:45:30 -08:00
sparky8512
8d3aadad41
Fill out the README 2020-12-22 20:05:25 -08:00
sparky8512
33e14d2073
Initial commit 2020-12-22 14:40:56 -08:00