therefore WU might be a suitable data source for other applications.
Simply scraping WUnderground just for CurrentConditions is not difficult:
it can be done with a rather simple Lua script operating on the output of an API call like https://api.weather.com/v2/pws/observat ... yourApiKey
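For illustration, a minimal sketch of such a call in Python (rather than Lua, matching requirement #1 below). The full endpoint path, parameters and JSON field names are my assumption of the usual WU PWS 'observations/current' call, since the example URL is truncated; verify them against your own working call:

Code:
# Minimal sketch: fetch CurrentConditions for one PWS from the WU API.
# NOTE: the full URL and the JSON field names are assumptions based on
# the truncated example above; check them against your own working call.
import json
import urllib.request

STATION_ID = "IYOURSTATION1"   # hypothetical station id
API_KEY = "yourApiKey"

url = ("https://api.weather.com/v2/pws/observations/current"
       f"?stationId={STATION_ID}&format=json&units=m&apiKey={API_KEY}")

with urllib.request.urlopen(url) as response:
    data = json.load(response)

obs = data["observations"][0]   # current conditions sit in the first element
print(obs["obsTimeLocal"], obs["metric"]["temp"], obs["humidity"])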
If you need additional info such as Sunrise/Sunset, it can be 'borrowed' from Domoticz.
For the timing of the scraping script, mind the quota of 500 calls/day for the applicable WU account!
More extended scraping of WUnderground for CurrentConditions & Forecast has been offered in the form of a dzVents script.
A suitable approach to extract data from WU is also presented as a PHP script by the author of the so-called Saratoga Template.
That almost meets the desired setup, but:
- a PHP application demands webspace to run
- for the subsequent application of the data there is not yet a ready PHP script.
Hence the requirements for the intended script:
#1. a Python script, because it should also be usable by non-Domoticz weather enthusiasts having a Raspberry Pi (or similar), but not necessarily Domoticz/dzVents, nor PHP & webspace
#2. reading Today's data from a WUnderground weather station via the equivalent of https://api.weather.com/v2/pws/observat ... yourApiKey
That yields a JSON file like the one in the attachment (note that this example file only covers the time until approx. 10:00 in the morning!); a minimal download sketch follows after this list.
#3. scanning the data in the resulting JSON file to get
a. Current/actual/latest data
b. Extremes (= high values and low values of this day after 00:00)
c. the times at which the Extremes occurred
#4. application of the found values for functions, such as generation of a quasi-Cumulus data file as described in this thread.
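For #2, the download step could look like the sketch below, assuming the 'observations/all/1day' variant of the PWS call (the exact path is an assumption, to be compared with your own working URL; the field names should be verified against the attached JSON file):

Code:
# Sketch for #2: read Today's data (all bins since 00:00) for one PWS.
# The endpoint path is an assumption; compare with your own working URL.
import json
import urllib.request

STATION_ID = "IYOURSTATION1"   # hypothetical
API_KEY = "yourApiKey"

url = ("https://api.weather.com/v2/pws/observations/all/1day"
       f"?stationId={STATION_ID}&format=json&units=m&apiKey={API_KEY}")

with urllib.request.urlopen(url) as response:
    today = json.load(response)

bins = today["observations"]   # one 'bin' per reporting interval since 00:00
print(len(bins), "bins so far today")

The sketches for #3 below reuse this bins list.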
Obviously, the characteristics of such a function feed back into the requirements for extraction of information, meaning that most extracted values will be used.
As stated, the PHP script mentioned above is good guidance for a general setup, and example Python scripts probably exist for each aspect, but I have not yet found one package that fits most of the above requirements in combination.
For #1, #2 and #4 I have bits & pieces available for recycling.
However, #3 is the 'critical kernel' when trying to apply the WU JSON file for Today's Data:
Aspect a. requires dissection of the latest section (or 'bin')
=> find the number of the latest bin, and extract & process the desired data from that latest bin as the current/actual values.
[A simpler, faster and probably more valid extraction of current/actual data is to use the (example) URL call for CurrentData and extract the data from its response, but that implies one more WU API call per cycle of the script.]
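A sketch for aspect a., reusing the bins list from the #2 sketch above (the field names such as tempAvg/humidityAvg are assumptions, to be checked against the attachment):

Code:
# Aspect a.: the latest bin is simply the last element of the list.
latest = bins[-1]
current = {
    "time": latest["obsTimeLocal"],       # field names assumed from the
    "temp": latest["metric"]["tempAvg"],  # usual WU 1day JSON; verify
    "humidity": latest["humidityAvg"],    # against the attached example
}
print(current)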
Aspect b. starts easy, because the initial values are in the first section/'bin 0' of the JSON file
=> extract the desired data from this bin 0 to serve as the initial reference for further checking of Extremes.
Subsequently scan the other sections/bins and register the higher respectively lower values.
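In Python that scan could look like this (again reusing bins; shown for temperature only, and without handling of possible null values):

Code:
# Aspect b.: take bin 0 as the initial reference, then scan the other bins.
temp_high = bins[0]["metric"]["tempHigh"]
temp_low = bins[0]["metric"]["tempLow"]
for b in bins[1:]:
    temp_high = max(temp_high, b["metric"]["tempHigh"])
    temp_low = min(temp_low, b["metric"]["tempLow"])
print("today's extremes:", temp_low, "...", temp_high)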
Aspect c. has the added difficulty that time-stamps related to the extreme values are not included
=> the times of occurrence have to be derived from the section/bin in which the related highest or lowest value occurred
=> when finding a new Extreme, register the time of occurrence = the epoch time of the bin in which it was found.
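A sketch combining aspects b. and c. for the high value (the per-bin 'epoch' field is an assumption; also note the resolution is one bin, so the exact minute of the Extreme is not recoverable from the 1day JSON):

Code:
# Aspect c.: register the bin's epoch time whenever a new Extreme is found.
import time

temp_high = bins[0]["metric"]["tempHigh"]   # initial reference from bin 0
time_high = bins[0]["epoch"]                # 'epoch' field assumed per bin
for b in bins[1:]:
    if b["metric"]["tempHigh"] > temp_high:
        temp_high = b["metric"]["tempHigh"]
        time_high = b["epoch"]              # time of occurrence = bin's epoch
print("max", temp_high, "at",
      time.strftime("%H:%M", time.localtime(time_high)))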
Does somebody have examples/hints for the mentioned aspects of #3?