This should have gone in with change c35588d01f, but I forgot that the Docker config has a separate list of package versions so they can be pinned to specific version numbers, instead of just specifying a minimum.
This brings the fix for the protocol changes that removed some of the history fields into the JSON version of this module.
I'm really only bothering with this because the script that uses it is mentioned in the project README, which I am in the process of updating.
Add timeouts to all gRPC remote calls and bump the yagrc package requirement to a version that does the same for the reflection service, as well as fixing a state issue around failed lazy import resolution.
This should address the script hang symptom on issue #36.
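As a rough illustration of the pattern (the module path, request shape, and timeout value here are assumptions based on typical grpcio usage, not the exact code in the scripts):

```python
import grpc

# Illustrative only: assumes the generated spacex.api.device modules are importable.
from spacex.api.device import device_pb2, device_pb2_grpc

REQUEST_TIMEOUT = 10  # seconds; the value used by the scripts may differ

with grpc.insecure_channel("192.168.100.1:9200") as channel:
    stub = device_pb2_grpc.DeviceStub(channel)
    try:
        # With timeout set, an unresponsive dish raises DEADLINE_EXCEEDED
        # instead of hanging the call (and the script) forever.
        response = stub.Handle(device_pb2.Request(get_status={}), timeout=REQUEST_TIMEOUT)
    except grpc.RpcError as e:
        print("gRPC request failed:", e)
```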
last_24h_obstructed_s has been removed from the grpc service protocol, along with all the other deprecated fields. This renders the seconds_obstructed item in the status mode group useless.
This is the downside of using reflection at run time to pull the protocol definitions. If something gets removed from the protocol that is still being used, it will break at run time instead of just returning default values. Then again, if it gets removed from the protocol, it's no longer useful, anyway.
This should address issue #35.
Derive connectivity state information from the "outage" field of the get_status response, which I hadn't noticed before because it only populates when the dish is not in a connected state. This restores the state data item in the status mode group, which had been rendered useless due to a grpc service change.
In addition to the previous possible state names, this adds a few more that pertain to outages while otherwise connected, which I think were just previously reported as "CONNECTED", as well as some special cases of offline.
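A simplified sketch of the idea; the field and enum names here are only approximations of the grpc protocol, and the actual starlink_grpc handling is more involved:

```python
def connectivity_state(status):
    # The outage sub-message only populates when the dish is not connected,
    # so an unset field is treated as the connected state.
    if not status.HasField("outage"):
        return "CONNECTED"
    # Otherwise the outage cause enum provides the more specific state name,
    # including the outage-while-connected and special offline cases.
    return status.outage.Cause.Name(status.outage.cause)
```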
A recent firmware change has stopped populating a number of result message fields in the grpc service that had previously been marked as deprecated. This caused the script to start crashing in some use cases.
While those fields are still in the protocol definition for now, this change removes usage of them entirely, in case they get removed. As things are now, they are useless, anyway, since they will always just return default values.
This renders useless the state, snr, scheduled, obstructed, and unscheduled items in the status, bulk_history, and ping_drop mode groups. Those items now mostly return empty values.
See issue #32 for more detail.
The prior 2 changes made the handling of the first set of polled loops inconsistent with subsequent sets with respect to maintaining data across dish reboot. This change makes them both work the same (and correctly).
This restores the ability of -o to keep polled data across a reboot. It broke due to a simple issue in concatenate_history, but I realized that counter tracking for -o accumulated data was also a bit broken, so I fixed that, too.
Fixes #31
Add a new command line script, dish_obstruction_map.py, that writes a PNG image based on the obstruction map data queried from the dish.
Supports color or greyscale output, with or without an alpha channel.
Does not yet support running in an interval loop, mostly because that will require templatizing the output filename in order to be useful.
Tracked on issue #27
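For illustration, the core of the conversion looks something like the following; it assumes the map data comes back as a row-major list of floats with negative values meaning "no data", and it uses Pillow, which is not necessarily what the actual script uses:

```python
from PIL import Image  # assumption: any PNG-capable image library would do

def write_greyscale_map(samples, num_rows, num_cols, filename):
    pixels = []
    for value in samples:
        if value < 0.0:
            pixels.append(0)  # no data: render as black
        else:
            pixels.append(round(value * 255))
    image = Image.new("L", (num_cols, num_rows))
    image.putdata(pixels)
    image.save(filename)
```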
Change the loop polling function (-o) to aggregate the history data on each polling loop, instead of just keeping the last polled history so it could be logged when a reboot is detected. This allows for computing statistics across a longer period than the size of the dish's history buffer, which has been reduced to 15 minutes recently.
This change also makes it so data is not logged right away when dish reboot is detected, so the logging always happens at the specified interval whether there was a reboot or not.
Finally, change the poll loop counting so data is not emitted on the first loop when polling is configured. That made sense to do when the history buffer was large enough to have the entire period's worth of data, but now it just results in a short period in the log output every time the script is restarted.
Fixes #29
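The aggregation change described above amounts to something like the following sketch, with illustrative names rather than the actual dish_common internals:

```python
def poll_loop(poll_history, compute_stats, emit, loops_per_report):
    accumulated = []  # samples gathered since the last report
    loops = 0
    while True:
        # One polling loop's worth of new samples, aggregated rather than
        # overwriting the previously polled buffer.
        accumulated.extend(poll_history())
        loops += 1
        # Logging happens only at the reporting interval, whether or not a
        # reboot was detected in between.
        if loops >= loops_per_report:
            emit(compute_stats(accumulated))
            accumulated = []
            loops = 0
```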
Add dish direction and "prolonged" obstruction info to the status mode group.
These were added to the grpc service at some point over the last several months.
Only lightly tested, given that my dish no longer reports significant periods of obstruction.
This is related to discussion in issue #27, although it doesn't address that issue in the slightest.
Not sure how I failed to notice this in testing, but for the cases where there is no data to output, the common layers pass up a None Python object, and the text output was sometimes turning that into "None", whereas for CSV output, at least, just omitting any value in the field would be more appropriate.
Oddly, the bulk data path did have logic for turning None into empty string, but not the status or history stats code paths. This makes it so all text output has the same transformation logic.
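The shared transformation is essentially this (simplified):

```python
def text_value(value):
    # None means "no data"; emit an empty field rather than the string "None".
    return "" if value is None else str(value)
```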
New command line option, -N or --numeric, that will cause all boolean values, including those in sequences/arrays, to be written as 1 or 0 instead of True or False.
Per request in issue #26.
WARNING: Use or non-use of this option with the database output scripts will change the schema of the data. sqlite doesn't care about that, because it stores booleans as integers, anyway, but InfluxDB will trip an error if you try to record data points with this option to a database that has data points recorded without it, or vice versa.
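An illustrative version of the conversion (not the exact code), covering booleans inside sequences as well:

```python
def numeric(value):
    if isinstance(value, bool):
        return int(value)  # True -> 1, False -> 0
    if isinstance(value, (list, tuple)):
        return type(value)(int(v) if isinstance(v, bool) else v for v in value)
    return value
```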
Remove grpcurl and grpcio-tools from container configuration and add yagrc, so that the direct reflection support in the Python scripts can be used. Also, pin all Python packages, including package dependencies, to specific version numbers, since that was already the case by happenstance due to the way Docker caches its build images and an undesirable version of the protobuf package was being cached.
Addresses #23, which was directly about this change.
Expected to also address #22, as a result of pinning the protobuf package version.
Should also prevent a recurrence of #18, since yagrc will automatically get any new dependent protocol files via reflection.
statistics.quantiles was not present in Python 3.7 or earlier, which is a problem on Windows if you want to run a binary optimized version of the protobuf package, since those are not currently being posted for Python 3.8 or later.
This change switches to use the weighted median function just with equal weights. It's a bit of overkill, but it also cuts out the mess that was working around deficiencies of the statistics.quantiles implementation.
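Roughly, the substitution works because a weighted median picks the value where the cumulative weight crosses half the total, which with equal weights is just a (lower) median. A minimal sketch, not the actual starlink_grpc function:

```python
def weighted_median(values, weights):
    total = sum(weights)
    cumulative = 0.0
    for value, weight in sorted(zip(values, weights)):
        cumulative += weight
        if cumulative >= total / 2:
            return value

# Equal weights turn it into a plain median, with no dependency on
# statistics.quantiles (and thus no Python 3.8 requirement).
print(weighted_median([40.0, 35.0, 52.0, 47.0], [1.0] * 4))
```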
SpaceX has been using inconsistent field ordering when adding alerts, so field index cannot be used to consistently identify the specific alerts. Message number is more appropriate for that, anyway, but is not guaranteed to be a low enough number to fit into a bit field. Oh well, in the unlikely event that SpaceX switches to larger message numbers, they just won't show up in the alerts bit field (but will still show up in alert_detail).
This does make the bit ordering in alerts inconsistent with prior versions of these tools, but I've never actually seen one of these alerts report true, so hopefully this doesn't impact anyone.
The alerts are still sorted by index number in the alert_detail text output, which is a problem for CSV output, but I think ordering by message number instead would be pointlessly complex. alert_detail is not a great fit for CSV output anyway, due to its variable length, so I just added a warning about that in the text script module doc.
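The bit field construction is conceptually like this, using the protobuf field descriptors; the names are illustrative rather than the exact starlink_grpc code:

```python
def alert_bits(alerts_message):
    bits = 0
    for field in alerts_message.DESCRIPTOR.fields:
        if getattr(alerts_message, field.name):
            # field.number stays stable even if SpaceX reorders the fields;
            # an unexpectedly large message number simply falls outside the
            # bit field (but still shows up in alert_detail).
            bits |= 1 << (field.number - 1)
    return bits
```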
Currently only implemented for sqlite, since with InfluxDB, there is the complication that the InfluxDB server may not be available to query at script start time. Also, it only applies when polling all samples, which is not the default, and even then can be disabled with either --skip-query or --no-counter options.
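The start-time query amounts to something like this; the table and column names are made up for illustration:

```python
import sqlite3

def last_recorded_counter(db_path):
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute("SELECT MAX(counter) FROM history").fetchone()
        # None means the database has no prior samples to skip past.
        return row[0] if row else None
    finally:
        conn.close()
```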
Remove the crontab instructions from the README, since the periodic loop functionality is probably now a better approach for periodic recording of stats data.
Got broken when I made a separate samples option for bulk history vs history stats. Also, it looks like I never actually implemented support for the --skip-query option.
A few things I noticed while porting this code to the JSON script. The only real change here is fixing the bulk history output to print UTC time instead of local time.
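The UTC fix is essentially just formatting the timestamps with an explicit UTC timezone, along these lines:

```python
from datetime import datetime, timezone

# Print a sample timestamp in UTC rather than whatever the local zone happens to be.
print(datetime.fromtimestamp(1600000000, tz=timezone.utc).isoformat())
```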
This brings most of the history-related functionality implemented in the grpc scripts to the JSON version, but only for text output. It also renames parserJsonHistory.py to dish_json_text.py, which removes the last remaining complaint from pylint about module name not conforming to style conventions.
A lot of this is just duplicated code from dish_common and dish_grpc_text, just simplified a little where some of the flexibility wasn't needed.
This removes compatibility with Python 2.7, because I didn't feel like reimplementing statistics.pstdev and didn't think such compatibility was particularly important.
This further complicates the code, for functionality that probably only I care about, but when computing stats for relatively long time intervals, it really hurts when the dish reboots and up to an entire time period's worth of data is lost at exactly the point where it may have been having interesting behavior.
Since the alert types are determined dynamically from the protocol definition, the status schema may need to be updated even if nothing changed in the scripts, when the dish software adds a new alert type (which just happened, say hello to the "mast_not_near_vertical" alert). This allows the manual override for that case, not just schema version downgrade.
Probably not terribly useful unless someone needs to tunnel through a different network to get to their dish, but it makes testing the dish unreachable case a lot easier. This was complicated a bit by the fact that a channel (and therefore the dish IP and port) is needed to get the list of alert types via reflection due to prior changes.
This exposed some issues with the error message for dish unreachable, so I fixed those, too.
I had only added the one that the Starlink app uses to show obstructions, because it didn't seem like the other one was all that useful, but people seem to be interested in studying the difference between the 2, so might as well have it. This is in the obstruction_detail group, along with the other one. I'm kinda regretting naming the first one as I did, though, because it's now a little confusing between my naming and the naming in the grpc message.
Since this is a new field, also had to implement schema updates for the sqlite script.
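The schema update follows the usual sqlite pattern of checking PRAGMA user_version and altering the table when it is behind; the table name, column name, and version numbers here are hypothetical:

```python
import sqlite3

def ensure_schema(conn):
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    if version < 2:
        # Add the new obstruction fraction column to existing databases.
        conn.execute("ALTER TABLE status ADD COLUMN fraction_obstructed REAL")
        conn.execute("PRAGMA user_version=2")
        conn.commit()
```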
If the spacex grpc modules are not available in the import path, will now fall back to using reflection to get them dynamically. I'm not real happy with the mess this made of the import lines, though (and neither is pylint...), so I may hack on that a little further when I get the time.
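The fallback is shaped roughly like a guarded import, with the reflection path (via yagrc) taking over when the static modules are missing:

```python
try:
    # Preferred: statically generated protocol modules, if they are installed.
    from spacex.api.device import dish_pb2
except ImportError:
    # Otherwise leave it unresolved and let the yagrc reflection machinery
    # load the protocol from the dish at run time.
    dish_pb2 = None
```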
Add a requirements.txt file to enable installation of all prerequisites so users don't have to follow the individual instructions for each dependency package. At some point, I'll really need to add proper Python packaging so the whole thing can just be installed via pip.
Rearrange the README a bit, since some of the sections have increased or decreased in relevance over time.
I'm sure this isn't a particularly optimal implementation, but it's functional.
This required exporting knowledge about the types that will be returned per field from starlink_grpc and moving things around a little in dish_common.
Apparently, it was a little _too_ simple.
Also, update the description for the seconds_to_first_nonempty_slot field to reflect some behavior this script was able to capture.
This adds an example script for the starlink_grpc module. It's a kinda silly thing, but I threw it together to better understand some of the status data, so I figured I'd upload it, since the other example is for direct grpc usage (or for starlink_json if parseJsonHistory can be considered an example).
Rename dishDumpStatus so pylint will stop complaining about the module name. The only script left with my old naming convention now is parseJsonHistory.py.
Add latency and usage stat groups to the stats computed from history samples. This includes an attempt at characterizing latency under network load, too, but I don't know how useful that's going to be, so I have marked that as experimental, in case it needs algorithmic improvements.
The new groups are enabled on the command line by use of the new mode names: ping_latency, ping_loaded_latency, and usage.
Add valid_s to the obstruction details status group. This was the only missing field from everything available in the status response (other than wedge_fraction_obstructed, which seems redundant to wedge_abs_fraction_obstructed), and I only skipped it because I don't know what it means exactly. Adding it now with my best guess at a description in order to avoid a compatibility breaking change later.
Closes #5