
starlink-grpc-tools

This repository has a handful of tools for interacting with the gRPC service implemented on the Starlink user terminal (AKA "the dish").

For more information on what Starlink is, see starlink.com and/or the r/Starlink subreddit.

Prerequisites

Most of the scripts here are Python scripts. To use them, you will need either Python installed on your system or the Docker image. If you use the Docker image, you can skip the rest of the prerequisites other than installing Docker itself and making sure the dish IP is reachable. For Linux systems, the python package from your distribution should be fine, as long as it is Python 3.

All the tools that pull data from the dish expect to be able to reach it at the dish's fixed IP address of 192.168.100.1, as do the Starlink Android app, iOS app, and the browser app you can run directly from http://192.168.100.1. When using a router other than the one included with the Starlink installation kit, this usually requires some additional router configuration. That configuration is beyond the scope of this document, but if the Starlink app doesn't work on your home network, then neither will these scripts. That being said, you do not need the Starlink app installed to make use of these scripts. See here for more detail on this.

Running the scripts within a Docker container requires Docker to be installed. Information about how to install that can be found at https://docs.docker.com/engine/install/. See below for how to pull the starlink-grpc-tools container image.

Required Python modules (for non-Docker usage)

The easiest way to get the Python modules used by the scripts is to do the following, which will install the latest versions of a superset of the required modules:

pip install --upgrade -r requirements.txt

If you really care about the details here or wish to minimize your package requirements, you can find more detail about which specific modules are required for what usage in this Wiki article.

Generating the gRPC protocol modules (for non-Docker usage)

This step is no longer required, nor is it particularly recommended, so the details have been moved to this Wiki article.

Usage

Of the three groups below, the grpc scripts are really the only ones being actively developed. The others serve mostly as examples of what can be done with the underlying data.

The grpc scripts

This set of scripts includes dish_grpc_text.py, dish_grpc_influx.py, dish_grpc_influx2.py, dish_grpc_sqlite.py, and dish_grpc_mqtt.py. They mostly support the same functionality, but write their output in different ways. dish_grpc_text.py writes data to standard output, dish_grpc_influx.py and dish_grpc_influx2.py send it to an InfluxDB 1.x and 2.x server, respectively, dish_grpc_sqlite.py writes it to a sqlite database, and dish_grpc_mqtt.py sends it to an MQTT broker.

All these scripts support processing status data and/or history data in various modes. The status data is mostly what appears related to the dish in the Debug Data section of the Starlink app, whereas most of the data displayed in the Statistics page of the Starlink app comes from the history data. Specific status or history data groups can be selected by including their mode names on the command line. Run the scripts with the -h command line option to get a list of available modes. See the documentation at the top of starlink_grpc.py for detail on what each of the fields means within each mode group.

For example, data from all the currently available status groups can be output by doing:

python3 dish_grpc_text.py status obstruction_detail alert_detail

By default, dish_grpc_text.py will output in CSV format. You can use the -v option to instead output in a (slightly) more human-readable format.

By default, all of these scripts will pull data once, send it off to the specified data backend, and then exit. They can instead be made to run in a periodic loop by passing the -t option to specify a loop interval, in seconds. For example, to capture status information to an InfluxDB server every 30 seconds, you could do something like this:

python3 dish_grpc_influx.py -t 30 [... probably other args to specify server options ...] status

Some of the scripts (currently only the InfluxDB ones) also support specifying options through environment variables. See details in the scripts for the environment variables that map to options.
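The pattern those scripts use can be sketched roughly as follows. This is a hypothetical illustration, not the scripts' actual code; the variable names are taken from the Docker section later in this README, and the fallback values are assumptions (check the scripts themselves for the authoritative mapping):

```python
import os

def influx_options_from_env():
    """Read InfluxDB connection options from the environment,
    falling back to placeholder defaults when a variable is unset."""
    return {
        "host": os.environ.get("INFLUXDB_HOST", "localhost"),
        "port": int(os.environ.get("INFLUXDB_PORT", "8086")),
        "database": os.environ.get("INFLUXDB_DB", "starlinkstats"),
    }

print(influx_options_from_env())
```

Command line options, where both are given, generally take precedence over environment variables.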

Bulk history data collection

dish_grpc_influx.py, dish_grpc_influx2.py, dish_grpc_sqlite.py, and dish_grpc_text.py also support a bulk history mode that collects and writes the full second-by-second data instead of summary stats. To select bulk mode, use bulk_history for the mode argument. You'll probably also want to use the -t option to have it run in a loop.

Polling interval

A recent (as of 2021-Aug) change in the dish firmware appears to have reduced the amount of history data returned from the most recent 12 hours to the most recent 15 minutes, so if you are using the -t option to poll either bulk history or history-based statistics, you should choose an interval less than 900 seconds; otherwise, you will not capture all the data.

Computing history statistics (one or more of groups ping_drop, ping_run_length, ping_latency, ping_loaded_latency, and usage) across periods longer than the 15 minute history buffer may be done by combining the -t and -o options. The history data will be polled at the interval specified by the -t option, but it will be aggregated over the number of polls specified by the -o option, and statistics will be computed against the aggregated data, which covers a period equal to the -t option value times the -o option value. For example, the following:

python3 dish_grpc_text.py -t 60 -o 60 ping_drop 

will poll history data once per minute, but compute statistics only once per hour. This also reduces data loss due to a dish reboot, since the -o option will aggregate across reboots, too.
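The arithmetic behind choosing -t and -o values can be checked with a short sketch (the 900-second figure comes from the history buffer limit described above):

```python
HISTORY_BUFFER_SECONDS = 900  # dish keeps roughly 15 minutes of history

def stats_period(poll_interval, poll_loops):
    """Seconds of data each computed statistics sample covers,
    given -t poll_interval and -o poll_loops."""
    if poll_interval >= HISTORY_BUFFER_SECONDS:
        # polling slower than the buffer refills means lost samples
        raise ValueError("poll interval must be under 900 seconds")
    return poll_interval * poll_loops

# -t 60 -o 60: poll every minute, compute stats once per hour
print(stats_period(60, 60))  # 3600 seconds
```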

The obstruction map script

dish_obstruction_map.py is a little different in that it doesn't write to a database, but rather writes PNG images to the local filesystem. To get a single image of the current obstruction map using the default colors, you can do the following:

python3 dish_obstruction_map.py obstructions.png

or to run in a loop writing a sequence of images once per hour, you can do the following:

python3 dish_obstruction_map.py -t 3600 obstructions_%s.png

Run it with the -h command line option for full usage details, including control of the map colors and color modes.
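The filename templating above can be pictured with a small sketch. This is hypothetical illustration only: whether %s is replaced with a Unix timestamp or some other sequence value is documented by the script's -h output; a timestamp is assumed here:

```python
import time

def image_filename(template, now=None):
    """Expand a %s placeholder in an output filename template,
    assuming it is replaced with an integer Unix timestamp."""
    if "%s" not in template:
        return template
    stamp = int(now if now is not None else time.time())
    return template.replace("%s", str(stamp))

print(image_filename("obstructions_%s.png", now=1650000000))
```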

Reboot and stow control

dish_control.py is a simple stand-alone script that can issue reboot, stow, or unstow commands to the dish:

python3 dish_control.py reboot
python3 dish_control.py stow
python3 dish_control.py unstow

These operations can also be done using grpcurl, thus avoiding the need to use Python or install the required Python module dependencies. See here for specific grpcurl commands for these operations.

The JSON parser script

dish_json_text.py operates on a JSON format data representation of the protocol buffer messages, such as that output by grpcurl. The command lines below assume grpcurl is installed in the runtime PATH. If that's not the case, just substitute in the full path to the command.

dish_json_text.py is similar to dish_grpc_text.py, but it takes JSON format input from a file instead of pulling it directly from the dish via grpc call. It also does not support the status info modes, because those are easy enough to interpret directly from the JSON data. The easiest way to use it is to pipe the grpcurl command directly into it. For example:

grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle | python3 dish_json_text.py ping_drop

For more usage options, run:

python3 dish_json_text.py -h

The one bit of functionality this script has over the grpc scripts is that it supports capturing the grpcurl output to a file and reading from that, which may be useful if you're collecting data in one place but analyzing it in another. Otherwise, it's probably better to use dish_grpc_text.py, described above.
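To give a feel for what dish_json_text.py consumes, here is a hedged sketch that computes a mean ping drop fraction from a synthetic fragment of get_history JSON. The field names (dishGetHistory, popPingDropRate) follow the protocol's JSON rendering but are assumptions here; starlink_json.py is the authoritative parser:

```python
import json

# Synthetic sample shaped like grpcurl's get_history output,
# holding per-second ping drop fractions (0.0 = no loss, 1.0 = total loss).
sample = json.loads("""
{"dishGetHistory": {"current": "5",
                    "popPingDropRate": [0.0, 0.0, 1.0, 0.0, 0.5]}}
""")

drops = sample["dishGetHistory"]["popPingDropRate"]
print(sum(drops) / len(drops))  # mean drop fraction over the sample
```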

Other scripts

dump_dish_status.py is a simple example of how to use the grpc modules (the ones generated by protoc, not starlink_grpc) directly. Just run it as:

python3 dump_dish_status.py

and revel in copious amounts of dish status information. OK, maybe it's not as impressive as all that. This one is really just meant to be a starting point for real functionality to be added to it. The grpc example in this script assumes you have the generated gRPC protocol modules. For a (relatively) simple example of using reflection to avoid that requirement, see dish_control.py.

poll_history.py is another silly example, but this one illustrates how to periodically poll the status and/or bulk history data using the starlink_grpc module's API. It's not really useful by itself, but if you really want to, you can run it as:

python3 poll_history.py

Possibly more simple examples to come, as the other scripts have started getting a bit complicated.

extract_protoset.py can be used in place of grpcurl for recording the dish protocol information. See the related Wiki article for more details.

Running with Docker

The supported docker image for this project is now the one hosted in the GitHub Packages repository.

You can get the "latest" image with the following command:

docker pull ghcr.io/sparky8512/starlink-grpc-tools

This will pull the image tagged as "latest". There should also be images for all recent tagged releases of this project, but those tend to be few and far between, so the most recent one will often be missing some important changes. See the package repository for a full list of tagged images.

You can run it with the following:

docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools <script_name>.py <script args...>

For example, the following will print current status info and then exit:

docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_text.py -v status alert_detail

Of course, you can change the name to whatever you want instead, and use other docker run options, as appropriate.

The default command is dish_grpc_influx.py status alert_detail, which is really only useful if you pass in environment variables with user and database info, such as:

docker run --name='starlink-grpc-tools' -e INFLUXDB_HOST={InfluxDB Hostname} \
    -e INFLUXDB_PORT={Port, 8086 usually} \
    -e INFLUXDB_USER={Optional, InfluxDB Username} \
    -e INFLUXDB_PWD={Optional, InfluxDB Password} \
    -e INFLUXDB_DB={Pre-created DB name, starlinkstats works well} \
    ghcr.io/sparky8512/starlink-grpc-tools

When running in the background, you will probably want to specify a -t script option, to run in a loop, otherwise it will exit right away and leave an inactive container. For example:

docker run -d -t --name='starlink-grpc-tools' -e INFLUXDB_HOST={InfluxDB Hostname} \
    -e INFLUXDB_PORT={Port, 8086 usually} \
    -e INFLUXDB_USER={Optional, InfluxDB Username} \
    -e INFLUXDB_PWD={Optional, InfluxDB Password} \
    -e INFLUXDB_DB={Pre-created DB name, starlinkstats works well} \
    ghcr.io/sparky8512/starlink-grpc-tools -v -t 60 status alert_detail

The -t option to docker run will prevent Python from buffering the script's standard output and can be omitted if you don't care about seeing the verbose output in the container logs as soon as it is printed.
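If you prefer to keep the container configuration in a file, the same settings can be expressed as a Docker Compose sketch. This is a hypothetical example, not a file shipped with the project; the brace-wrapped values are placeholders to fill in, matching the environment variables shown above:

```yaml
services:
  starlink-grpc-tools:
    image: ghcr.io/sparky8512/starlink-grpc-tools
    environment:
      INFLUXDB_HOST: "{InfluxDB Hostname}"
      INFLUXDB_PORT: "8086"
      INFLUXDB_DB: "starlinkstats"
    command: dish_grpc_influx.py -v -t 60 status alert_detail
    restart: unless-stopped
```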

If there is some problem with accessing the image from the GitHub Packages repository, there is also an image available on Docker Hub, which can be accessed as neurocis/starlink-grpc-tools, but note that that image may not be as up to date with changes as the supported one.

Running with SystemD

To run, for example, the dish_grpc_influx2.py script via systemd, the following steps are one option. The commands should work on Debian/Ubuntu-based distributions:

sudo apt install python3-venv
cd /opt/
sudo mkdir starlink-grpc-tool
sudo chown <your non-root user> starlink-grpc-tool
git clone <git url> starlink-grpc-tool
cd starlink-grpc-tool
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
sudo cp systemd/starlink-influx2.service /etc/systemd/system/starlink-influx2.service
sudo <your favorite editor> /etc/systemd/system/starlink-influx2.service
# Set influx url, token, bucket and org
sudo systemctl enable starlink-influx2
sudo systemctl start starlink-influx2
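For orientation, a unit file along these lines is what the steps above install. This is a hypothetical sketch, not the repository's actual systemd/starlink-influx2.service, which is authoritative; the paths assume the /opt layout used above:

```ini
[Unit]
Description=Starlink gRPC tools InfluxDB 2.x collector
After=network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/opt/starlink-grpc-tool
# Set the InfluxDB URL, token, bucket, and org here, via either
# Environment= lines or command line options (see dish_grpc_influx2.py).
ExecStart=/opt/starlink-grpc-tool/venv/bin/python3 dish_grpc_influx2.py -t 30 status alert_detail
Restart=on-failure

[Install]
WantedBy=multi-user.target
```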

Dashboards

Several users have built dashboards for displaying data collected by the scripts in this project. Information on those can be found in this Wiki article. If you have one you would like to add, please feel free to edit the Wiki page to do so.

Note that feeding a dashboard will likely require the -t script option to dish_grpc_influx.py, in order to collect status and/or history information periodically.

To Be Done (Maybe, but Probably Not)

The Wiki for this GitHub project has a little more information, and was originally planned to have more detail on some aspects of the history data, but that's mostly been obsoleted by changes to the gRPC service. It still may be updated some day with more use case examples or other information. In the meantime, it is configured as editable by anyone with a GitHub login, so if you have relevant content you believe to be useful, feel free to add it.

No further data collection functionality is planned at this time. If there's something you'd like to see added, please feel free to open a feature request issue. Bear in mind, though, that functionality will be limited to that which the Starlink gRPC services support. In general, those services are limited to what is required by the Starlink app, so unless the app has some related feature, it is unlikely the gRPC services will be sufficient to implement it in these tools.

ChuckTSI's Better Than Nothing Web Interface uses grpcurl and PHP to provide a spiffy web UI for some of the same data this project works on.

starlink-cli is another command line tool for interacting with the Starlink gRPC services, including the one on the Starlink router, in case Go is more your thing.