The notes below cover options for programmatic data access to the 2.0 platform using the legacy v1 API mechanisms. Any new software implementation would be prudent to use the v2 API instead, in case the v1 API is turned off at some future date (though there is currently no sign of this even being suggested). See v2 API data access by programs for more details of the v2 API.

There are of course multiple options for viewing and charting data interactively in the 2.0 web interface, ie as viewed in any standard browser and including the option to download CSV files on command. These interactive options are not described here – they are well covered in other standard resources such as the Davis video introducing the 2.0 interface. What is covered here are the options for downloading data automatically into your own program rather than using the web browser interface.

Subscription level

Remember that the 2.0 platform has multiple subscription tiers, eg Basic/Free and then the paid-for Pro, Pro+ etc. The Basic/Free tier may not have access to the same data download options as Pro. Within the Pro levels, permitted data download frequency may vary and you may potentially need to pay a higher annual subscription to access data more frequently. It’s also worth remembering that, even where data downloads are allowed for logger-based stations, only the most recent 10,240 archive records (a limit that is subject to change) are available for download on the Free tier.

Data download options

We’re aware of four distinct download options for retrieving data programmatically from the 2.0 platform via the v1 API, as detailed further below. It’s worth remembering a few points of background to make full sense of these different options.

First, there are three main types of sensor data available from the 2.0 platform:

  • ‘Current conditions’ data: The latest single data record received from the upload device;
  • Hilow data: A rich assortment of high and low values for a range of parameters and over a range of time periods;
  • Summary data: Archive-type summary records, often with multiple records in a download block;

Terminology: A Summary data record is a summary of sensor data created at a defined time-point and logged. The classic example of a summary record is the Davis Weatherlink archive record. However, summary data from an EM station cannot fit into the classic WL archive record and so the term ‘summary data’ is used to refer to archive-type data in general, while ‘archive data’ is restricted specifically to the classic WL archive record as used for data from all Vue and VP2 stations.

There’s an important difference for data processing between the summary data and the current conditions or hilow data. The current & hilow downloads provide only one data value per parameter, which will be the latest available value for that parameter; there may well be many different parameters in the download, especially for stations with more complex sensor configurations, but each parameter will have just a single value. In contrast, the summary downloads will typically return multiple summary records per download corresponding to the successive archive time-points in the requested download period.

NB There is also a fourth category of station metadata available with the JSON and XML options listed below, but this is predominantly descriptive information about the station rather than actual sensor data and so we’re not going to describe this option in detail.

Second, the distinction between ‘current conditions’ and ‘summary’ data blurs somewhat for uploads from cell-based devices such as EM and Connect. For such devices, which upload just one single record at the plan interval (5 or 15 or 60 minutes), each successive record upload effectively becomes the next archive record and is stored as such in the 2.0 database. So the plan interval is effectively the archive interval for cell-based devices.

This contrasts with uploads from logger-based devices (eg USB via WLv6 or IP) where a ‘current conditions’ record is uploaded once per minute, but not added to the database. The logger is responsible for generating and storing the archive records at whatever archive interval is set within the logger and the latest batch of archive records is then uploaded from the logger once per hour.

This distinction has some practical implications, eg there is no point in seeking to retrieve archive data for logger-based devices more than once per hour – no new data will be seen at intermediate time points. The same probably applies to hilow data, though this isn’t known for sure.
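Given that constraint, a simple scheduling helper can keep a long-running downloader from polling pointlessly. This is only a sketch of one possible design choice, assuming polls aligned to the top of the UTC hour; nothing in the platform mandates this alignment, and logger uploads will not necessarily land exactly on the hour.

```python
from datetime import datetime, timedelta, timezone

def next_archive_poll(now: datetime) -> datetime:
    """Return the next top-of-the-hour time after 'now'.

    For logger-based stations there is no point polling for archive data
    more often than hourly, so a downloader can sleep until this time.
    """
    top_of_hour = now.replace(minute=0, second=0, microsecond=0)
    return top_of_hour + timedelta(hours=1)

# Example: at 14:23 UTC the next worthwhile archive poll is 15:00 UTC.
poll_at = next_archive_poll(datetime(2021, 2, 26, 14, 23, tzinfo=timezone.utc))
```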

Finally, remember that the download cannot include more data than the download format allows. So, for example, a ‘web download’ for an EM station can only ever be populated with sensor data from the ISS attached to the EM gateway – there is no mechanism for forcing other elements of non-ISS EM data into the archive record format.

The four data download methods are detailed below.

Download of current conditions and Hilow data in JSON format

This download method differs from the summary data downloads in that only a single value is returned for each sensor parameter, which will obviously be the latest value uploaded for that parameter. So any processing code only needs to cope with that single value rather than the multiple archive-type records that a summary download can generate. Also, new values will be available at the upload interval (for loggers) or the plan interval (for cell-based stations), ie usually much more often than the hourly interval that applies to summary downloads. That said, Davis strongly discourage polling for new data more often than every 10 minutes.

Current conditions and hilow data are combined in the same JSON object, which is returned when the request URL is called.

This requires three arguments to be passed in the URL:

  • DID of the station whose data is being requested;
  • The password of the station owner’s account (as distinct from the user’s password on shared stations);
  • The apiToken visible in the user’s account details on 2.0;

Note: It’s safest to limit the account password to letters A-Z (upper and lower case) and digits 0-9. Including certain symbols and punctuation marks in the password and hence in the URL may well be misinterpreted at the server and cause the JSON request to fail.
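As a sketch of making this call from a program (Python here): the base URL is the commonly quoted v1 endpoint and should be treated as an assumption, since it is not reproduced in these notes, and the DID, password and token values are placeholders. Building the query string with urlencode() has the side benefit of percent-encoding any awkward symbols in the password, which sidesteps the misinterpretation problem just described.

```python
import json
import urllib.parse
import urllib.request

# Assumed v1 endpoint -- confirm against your own Davis documentation.
V1_JSON_URL = "https://api.weatherlink.com/v1/NoaaExt.json"

def build_current_url(did: str, password: str, api_token: str) -> str:
    """Build the request URL for the combined current/hilow JSON object.

    urlencode() percent-encodes symbols in the password, so characters
    that would otherwise be misinterpreted at the server are made safe.
    """
    query = urllib.parse.urlencode(
        {"user": did, "pass": password, "apiToken": api_token}
    )
    return f"{V1_JSON_URL}?{query}"

def fetch_current(did: str, password: str, api_token: str) -> dict:
    """Fetch and decode the JSON object (network call, not exercised here)."""
    with urllib.request.urlopen(build_current_url(did, password, api_token)) as resp:
        return json.load(resp)
```

Remember the guidance above when using something like fetch_current(): poll no more often than every 10 minutes.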

It’s not worth detailing the structure of the JSON object here, partly because JSON is inherently self-describing but also because the structure will vary considerably depending on the station and on the sensors installed. The simple advice is to download a test JSON object for your development station and inspect its structure. That said, the structure is a little curious in places, but provided the correct key is chosen there should be no problem in accessing the JSON successfully.

One of the curiosities is that current conditions and hilow data are both contained in the same single JSON object. Many (but not all) of the current readings are contained in the root of the JSON object, while the hilow data is held in an inner object called ‘davis_current_observation’, which doesn’t seem entirely logical. The current readings in the root seem limited to main (ie ISS) temperature, humidity, wind and pressure data. Other current readings are available, eg for solar (and presumably UV) and other supplementary sensors, but these are mixed in with the hilow data. Rainfall seems a particular oddity in that the only current value available is the cumulative rainfall since midnight.

Also unexpected is that several of the current readings have multiple keys with different units or formats, eg pressure is available as 3 keys – one each for mb, inches and as a string format (which is odd since JSON is intrinsically a string format, although it presumably reflects some native variable type in the platform code). But this clearly makes it easier to source a value in whatever units one might prefer.
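By way of illustration, the fragment below pulls values out of a cut-down stand-in for the real object. The key names (‘pressure_mb’, ‘solar_radiation’ etc) are assumptions based on a typical download and must be checked against a real test download from your own station.

```python
# Trimmed, invented stand-in for the downloaded object; key names are
# illustrative and vary with station type and installed sensors.
observation = {
    "pressure_mb": "1013.2",
    "pressure_in": "29.92",
    "davis_current_observation": {
        "solar_radiation": "412",
        "rain_day_in": "0.08",
    },
}

# Pick whichever unit variant suits; note the values arrive as strings.
pressure_mb = float(observation["pressure_mb"])

# Supplementary readings such as solar sit inside the inner object,
# mixed in with the hilow data.
solar_wm2 = int(observation["davis_current_observation"]["solar_radiation"])
```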

Download of current conditions and Hilow data in XML format

This is essentially similar to the JSON option described immediately above, except that the data arrives as an XML tree rather than a JSON object and is requested from a different URL.
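Processing the XML response is straightforward with a standard parser. The sketch below uses Python’s ElementTree on a trimmed sample tree; the element names are assumptions and should be verified against a real download for your station.

```python
import xml.etree.ElementTree as ET

# Trimmed, invented stand-in for the downloaded tree; element names
# are illustrative only.
sample_xml = """<current_observation>
  <temp_f>68.3</temp_f>
  <davis_current_observation>
    <temp_day_high_f>71.2</temp_day_high_f>
  </davis_current_observation>
</current_observation>"""

root = ET.fromstring(sample_xml)

# Current readings sit at the root of the tree...
temp_f = float(root.findtext("temp_f"))

# ...while hilow values live in the inner davis_current_observation element.
day_high_f = float(root.findtext("davis_current_observation/temp_day_high_f"))
```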


‘Web download’ for binary archive (summary) data

This is the classic method for downloading archive data from the platform. The ‘Web download’ protocol is as described on p36 of the Davis ‘Serial Communication Reference Manual’ (aka the Serial Tech Ref manual). Note that this is a 2-stage HTTP call – the first stage requests a block of metadata describing available data since a timestamp argument supplied in the initial HTTP call, while a second HTTP call returns the required data.

The data returned will be a block of binary archive records in Rev B format as described on p32 of the same manual. This block of binary records will then need iterating through, parsing each record appropriately and then processing the individual data values as required. Remember that this approach is primarily intended only for non-EM stations. (Actually, it can also be used with EM stations but will only retrieve data for the cabled ISS connected directly to the EM gateway and not for any other sensors on the EM station.)
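The iterate-and-parse step might look like the sketch below. The 52-byte record size, the field offsets and the date/time packing rules are as given in the Serial manual’s Rev B description, but treat this as a starting point and verify every offset against the manual itself; only the leading fields are decoded here.

```python
import struct
from datetime import datetime

REV_B_RECORD_SIZE = 52  # each Rev B archive record occupies 52 bytes

def parse_record_header(record: bytes):
    """Decode the leading fields of one Rev B archive record.

    Layout (per the Serial manual): bytes 0-1 date stamp, bytes 2-3 time
    stamp, bytes 4-5 outside temperature in tenths of a degree F (signed),
    all little-endian.
    """
    date_stamp, time_stamp, temp_raw = struct.unpack_from("<HHh", record, 0)
    # The date stamp packs the date as: day + month*32 + (year-2000)*512
    day = date_stamp & 0x1F
    month = (date_stamp >> 5) & 0x0F
    year = 2000 + (date_stamp >> 9)
    # The time stamp is simply hour*100 + minute
    when = datetime(year, month, day, time_stamp // 100, time_stamp % 100)
    return when, temp_raw / 10.0  # outside temperature in degrees F

def iter_records(block: bytes):
    """Step through a downloaded block one 52-byte record at a time."""
    for offset in range(0, len(block), REV_B_RECORD_SIZE):
        yield block[offset:offset + REV_B_RECORD_SIZE]
```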

Download of summary data in CSV format

This option appears to be an automated way of performing the interactive CSV downloads that are available from the Data tab of the 2.0 interface and – subject to definitive confirmation – the CSV file format appears to be identical between manual and programmatic downloads (so a manually-downloaded file could be used to help initial development of data extraction routines).

There is, however, an important difference between the interactive and automated methods. The interactive option allows a user-defined block of data to be downloaded, which could extend to eg several days or weeks. In contrast, the automated option downloads one specific hourly file, ie with data covering a timespan of one hour only, as defined in the timestamp in the calling URL. It’s presumably expected that this download option will be used as part of a continuously running program, with a call made to the URL once per hour to download the latest block of CSV data and then process it as required.

This option requires a URL call in the form of:

where the FILE_NAME.csv is built from four underscore-separated elements as UserKey_DID_DATE_24HRTIME.csv

The DATE_24HRTIME element is the UTC time when the data was processed, for example 2017_08_04_19_00 would provide data for the hour up to 1900 UTC August 4th, 2017.
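Constructing the filename for a given hour is then a small exercise in UTC date formatting. In the sketch below the UserKey and DID values are placeholders.

```python
from datetime import datetime, timezone

def hourly_csv_name(user_key: str, did: str, hour_end_utc: datetime) -> str:
    """Build the UserKey_DID_DATE_24HRTIME.csv filename for the hourly file
    covering the hour that ends at hour_end_utc (a top-of-hour UTC time)."""
    stamp = hour_end_utc.strftime("%Y_%m_%d_%H_%M")
    return f"{user_key}_{did}_{stamp}.csv"

# Data for the hour up to 1900 UTC on August 4th, 2017 (placeholder key/DID):
name = hourly_csv_name(
    "MYUSERKEY", "001D0A000000",
    datetime(2017, 8, 4, 19, 0, tzinfo=timezone.utc),
)
```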

A validly-constructed calling URL should return a complex multi-line CSV file. Individual lines are CRLF-terminated within the overall CSV file and then each line has numerous comma-delimited fields, depending on the number of sensors installed. There are six distinct header lines and then one data line for each individual time-point record within the one-hour file duration.

It’s not really worth describing the structure in greater detail because the structure will vary considerably from station to station, depending on the station type and the number & type of sensors installed. The only viable approach for developers wishing to use this download option is to obtain a UserKey from Davis and to download an initial test CSV file. The structure of that file can then be analysed in detail and appropriate processing set up. Bear in mind that subsequent changes to the station’s sensor complement will change the CSV structure. One idea might be to use a CSV-to-JSON converter as part of the program design to allow for easier maintenance.
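As a minimal sketch of the parsing approach: skip the six header lines (keeping them for inspection if useful), then treat each remaining line as one archive time-point. The sample content below is an invented stand-in, since the real columns depend entirely on the station’s sensor complement.

```python
import csv
import io

HEADER_LINES = 6  # the hourly file opens with six distinct header lines

# Invented stand-in for a downloaded hourly file; the real column layout
# varies from station to station.
sample = (
    "h1\r\nh2\r\nh3\r\nh4\r\nh5\r\nh6\r\n"
    "2017-08-04 18:05,68.3,55\r\n"
    "2017-08-04 18:10,68.1,56\r\n"
)

reader = csv.reader(io.StringIO(sample))
headers = [next(reader) for _ in range(HEADER_LINES)]  # retained for analysis
rows = list(reader)  # one row per archive time-point in the hour
```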

Last modified: Feb 26, 2021

