Provides a data log data structure for FRAM and EEPROM chips, with functions to read the chip contents and send them back as a serial data string.

Dependencies:   W25Q80BV multi-serial-command-listener

Dependents:   xj-data-log-test-and-example

Data Logging Data Structure

Both read and write appear to work correctly, but testing has been limited.

Motivation

I needed a flexible data log structure that could tolerate evolving record layouts as I discovered more things that needed to be measured. I also wanted something that is mostly human readable while remaining concise enough to make efficient use of expensive storage resources.

I found it challenging to track everything needed to perform the after-the-fact analysis we need to improve our state machine. In addition, what I wanted to measure changed over time, and I needed a robust way to log this data so we could analyze it later without breaking or converting all the old data. A self-describing data format like JSON or XML would work, but FRAM is expensive, so I wanted something flexible yet concise.

I am working on A2WH, an electronic controller for a sophisticated product that balances many sensors, battery charging from photovoltaic panels, speed control of many different fans, and humidity and environmental data. Our main challenge is that we never have enough battery power to run everything, so we have to make decisions about what to run in an effort to produce the maximum amount of water from the available solar power. Our second challenge is that balancing system actions, such as increasing or decreasing fan speeds, is driven by a complex internal prediction model that attempts to balance many competing thermodynamic requirements. Getting all this right requires substantial after-the-fact analysis, and that requires logging a large amount of evolving data.

Design Notes

See: data-log-read.me.txt in the same project
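
For orientation, log entries look roughly like the lines below. These values are illustrative only, inferred from the Python parser at the bottom of this page; data-log-read.me.txt is the authoritative format description. Each line carries an hh:mm:ss time, a record tag, a tab, then comma-separated fields; HEAD records declare the field names for a record type, with .f/.l/.i suffixes as float/long/int type hints:

  09:15:02 DATE	2016-04-11
  09:15:02 HEAD	SENS	temp.f,rh.f,fan.i
  09:15:33 SENS	21.4,55.2,3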

Sample Use and Basic Test

Serial Command Interface

COMMANDS
  readall = send entire contents of log
  readlast 999
     999 = number of bytes from tail of log to retrieve
  tread 333 444
     333 = starting offset to start reading log
     444 = number of bytes to retrieve from log
  erase = erase log and start a new one
  help  = display this help
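
As an alternative to a terminal-emulator capture, a few lines of Python with the pyserial package can issue readall and save the response to a file. This is only a sketch: the port name, baud rate, timeout, and output file name are assumptions you will need to adjust for your setup.

import serial  # pyserial

PORT = "COM3"   # assumption: your board's serial port
BAUD = 9600     # assumption: match the firmware's serial settings

ser = serial.Serial(PORT, BAUD, timeout=5)
ser.write("readall\r\n")              # ask the logger to dump the whole log
fout = open("a2wh-capture.DLOG.TXT", "wb")
while True:
    chunk = ser.read(4096)            # keep reading until the timeout expires
    if len(chunk) == 0:
        break                         # no data for 5 s: assume the dump is done
    fout.write(chunk)
fout.close()
ser.close()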

Other Chips

For legacy reasons I am using the "W25Q80BV.h" library, simply because I started with it. The actual FRAM chip I am using is the 2 Mbit FRAM MB85RS2MTPH-G-JNE; I have also tested it with the SRAM 23LCV1024-I/P.

Simplifying Design Decision

I made a simplifying assumption: every time we generate a log entry, I record the offset of the next write at a specific location on the chip. This works and is fast, but it causes many updates against a single location. I prefer FRAM because this access pattern would rapidly fatigue flash chips like the W25Q80BV. Storing this pointer data in the CPU's internal storage has the same fatigue problem.
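
To illustrate the idea (a minimal sketch only; the firmware's actual pointer location, pointer width, and layout may differ), with the chip modeled as a flat byte array:

import struct

CHIP_BYTES = 256 * 1024   # 2 Mbit FRAM = 256 KB
PTR_LOC = 0               # fixed location holding the next-write offset
DATA_START = 4            # log area begins right after the 4-byte pointer

fram = bytearray(CHIP_BYTES)
struct.pack_into(">I", fram, PTR_LOC, DATA_START)

def append_entry(entry):
    # fetch the next-write offset from its fixed location
    ptr = struct.unpack_from(">I", fram, PTR_LOC)[0]
    fram[ptr:ptr + len(entry)] = entry
    # rewrite the pointer in place after every entry; this constant
    # rewriting of one cell is what would fatigue a flash chip
    struct.pack_into(">I", fram, PTR_LOC, ptr + len(entry))

append_entry(b"09:15:33 SENS\t21.4,55.2,3\n")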

Another option would be to store this offset and our other critical configuration data in the clock chip, but it is susceptible to losing power and, with it, this critical data.

One reason I don't log directly to the micro-SD is the same fatigue problem, but mostly it is for power management.

The FRAM chip provides adequate durability and data retention through power outages. The retention through outages is critical because A2WH systems can be buried under feet of snow in the winter, and the solar panels do not provide much recharge under those conditions.

One design option I have considered but not yet implemented is using a much smaller FRAM chip for critical configuration data and rapidly updated data, and then logging directly to a larger, less expensive flash chip.

Journaling to micro-SD

I later decided to add features to allow after-the-fact copying of the data to micro-SD cards, to obtain larger log storage without soldering in more chips. I found the micro-SD consumes quite a lot of power, so I still want to log directly to the FRAM and then copy to the micro-SD when surplus power is available. I am still thinking about consolidation tactics to allow re-use of the FRAM after the data has been copied to micro-SD.

Future

  • Support fast indexing by date, to pull back only the log entries between two dates.
  • Record the most recent record header for each record type where it is fast to access, so we can send the headers along with the data when returning only a portion of the log.
  • Support wrap-around use of the data log to re-use storage on the chip.
  • Copy data to micro-SD card and consolidate the FRAM chip for re-use.

License

By Joseph Ellsworth, CTO of A2WH. Take a look at A2WH.com, Producing Water from Air using Solar Energy. March-2016. License: https://developer.mbed.org/handbook/MIT-Licence. Please contact us at http://a2wh.com for help with custom design projects.

Committer: joeata2wh
Date: Wed Apr 13 04:15:43 2016 +0000
Revision: 11:bf816d33be80
Added Python parser to convert DLOG format captured from the readall command on the serial port into CSV format for use in Excel and R.

"""
Parse DLOG format from chip into something easier to work with on the PC.
Combines the CPU ID, Date, Time into single records and optionally adjusts
the date time for local time offset to make it easier to read.

Optionally save as CSV, TSV, or Elastic Search data structure to make secondary
analysis easier.

NOTE: Use the Excel features to wrap text on field names and freeze panes to
make the output much easier to use.

Assumes you have used something like the Teraterm file capture to read the
entire log contents from the chip using the "readall" command.

Due to the way we read the log it is fairly likely that we will re-read the
same item multiple times for the same time stamp, so we de-dup them. This is
because I quite often issue a readall command and fail to issue an erase
afterwards, which causes the same log entries to occur in multiple capture
files. This utility keeps only the most recent version located.

Sometimes it is easier and more space-efficient to log a series of smaller
records at different time intervals, but you need a holistic view to get an
idea of complete system state. The mergeLast feature allows the system to
merge the most recent record of different log types into the current record,
providing a larger virtual record in the output CSV.

We know that we will add fields and record types over time, so we accommodate
this by converting all records to a name-indexed version; then when we save
them we can look up the fields even if some of the older records do not
contain all the fields.

#TODO: Detect type .T fields, parse them as time stamps, and adjust to local
time. Otherwise they are difficult to read in Excel.

#TODO: Consider logging all time fields in the form hh:mm:ss; it will not use
much more space than the unix number and is much easier to read. Have not done
it yet because sometimes these go back across a day boundary.
"""
import re
import json
import datetime
import time
from datetime import tzinfo, timedelta, datetime
import csv
import glob

# Match time hh:mm:ss followed by a space, a label, then a tab.
logPref = re.compile(r"\d\d\:\d\d\:\d\d\s\w{1,10}\t")

# Grouping pattern to allow splitting a log line out into separate parts.
parsPat = re.compile(r"(\d\d)\:(\d\d)\:(\d?\.*\d*)\s(\w{1,10})\t(.*)")
dateGrpPat = re.compile(r"(\d\d\d\d).*(\d\d).*(\d\d)")

localTimeAdj = timedelta(hours=-7.0)  # FOR PACIFIC TIME: hours to add to current time for GMT adjustment

currDate = "01/01/01"
currYear = 1
currMon = 1
currDay = 1
ActiveCPUID = None
# Every time we see a new header type we create a new row for it,
# so we can calculate the field positions relative to field names.
# We know that fields will be added and inserted over time, so we
# always have to process the most recent log relative to the most
# recent definition for that record type.
activeHeaders = {}
recsByType = {}       # contains an array of records with field names, indexed by type
fldTypesByType = {}   # contains an array of conversion functions called when parsing input
recFldPositions = {}  # contains an array of fld names to allow fldName-from-position lookup
lastRecByType = {}
mergeLast = {}

def parse(fiName):
    global ActiveCPUID, currDate, currYear, currMon, currDay, localTimeAdj, mergeLast
    print "parse ", fiName
    f = open(fiName)
    print "mergeLast=", mergeLast
    for aline in f:
        rm = re.match(logPref, aline)
        if rm != None:
            po = re.split(parsPat, aline)
            hour = int(po[1])
            min = int(po[2])
            sec = float(po[3])
            tag = po[4].strip().upper()
            data = po[5].strip()

            if tag == "DATE":
                tarr = re.split(dateGrpPat, data)
                currYear = int(tarr[1])
                currMon = int(tarr[2])
                currDay = int(tarr[3])
                print "DATE tarr=", tarr, " currYear=", currYear, "currMon=", currMon, "currDay=", currDay

            elif tag == "HEAD":
                # Save our most recent definition of this record type.
                tarr = data.split("\t", 1)
                recName = tarr[0].upper()
                if len(tarr) < 2:
                    continue
                tarr = tarr[1].split(",")
                ndx = 0
                recMap = {}
                fldTypes = {}
                fldPositions = []
                activeHeaders[recName] = recMap
                fldTypesByType[recName] = fldTypes
                recFldPositions[recName] = fldPositions
                for fname in tarr:
                    fname = fname.strip()
                    recMap[ndx] = fname
                    fldPositions.append(fname)
                    # Figure out the type hint from the field name suffix if available.
                    fsega = fname.split(".")
                    ftype = fsega[-1]
                    if ftype == "f":
                        fldTypes[ndx] = float
                    elif ftype == "l":
                        fldTypes[ndx] = long
                    elif ftype == "i":
                        fldTypes[ndx] = int
                    else:
                        fldTypes[ndx] = str  # no recognized type hint: treat as string
                    # set up for next field
                    ndx = ndx + 1

            else:
                recName = tag
                arec = {}
                recArr = {}
                if recsByType.has_key(recName):
                    recArr = recsByType[recName]
                else:
                    recsByType[recName] = recArr
                flds = data.split(",")

                recDef = {}
                if activeHeaders.has_key(recName):
                    recDef = activeHeaders[recName]

                fldTypes = {}
                if fldTypesByType.has_key(recName):
                    fldTypes = fldTypesByType[recName]

                # tag was upper-cased above, so compare against "BOOT"
                if recName == "BOOT" and len(flds) > 2:
                    ActiveCPUID = flds[0]

                # Merge the last occurrence of a defined set of record types
                # into this record. E.g. if we are logging state we probably
                # also need the last sensor reading.
                if mergeLast.has_key(recName):
                    for mergeRecName in mergeLast[recName]:
                        if lastRecByType.has_key(mergeRecName):
                            mergeRec = lastRecByType[mergeRecName]
                            for mergeFldName in mergeRec:
                                arec[mergeFldName] = mergeRec[mergeFldName]

                if ActiveCPUID != None:
                    arec["cid"] = ActiveCPUID

                # Compute a local adjusted time for this log entry.
                fractSec = int((sec - int(sec)) * 1000000)  # fractional seconds to microseconds
                ltime = datetime(currYear, currMon, currDay, hour, min, int(sec), fractSec)
                adjtime = ltime + localTimeAdj
                asiso = adjtime.isoformat()
                arec["time"] = asiso

                # Update the record with the fields. Do this after the merge
                # because we want fields with the same name to take precedence
                # over the merged-in values.
                fndx = 0
                for afld in flds:
                    fldName = "x" + str(fndx)
                    if recDef.has_key(fndx):
                        fldName = recDef[fndx]

                    if fldTypes.has_key(fndx):
                        convFun = fldTypes[fndx]
                        try:
                            if convFun != None and convFun != str:
                                afld = convFun(afld)
                        except:
                            # conversion failed: keep the raw string value
                            pass

                    arec[fldName] = afld
                    fndx = fndx + 1
                # Keep only the most recent rec for this time stamp for this record type.
                recArr[asiso] = arec
                lastRecByType[recName] = arec

# Merge records with identical time stamps and different types.
def mergeRecords(baseType, auxType):
    recs = recsByType[baseType]
    auxRecs = recsByType[auxType]
    reckeys = recs.keys()
    reckeys.sort()
    for akey in reckeys:
        brec = recs[akey]
        if auxRecs.has_key(akey):
            auxrec = auxRecs[akey]
            for fname in auxrec:
                brec[fname] = auxrec[fname]


def toCSVMerged(baseFiName, baseType):
    pass

# Generate a CSV file for every record type, ordered by time stamp,
# with the time stamp adjusted for local time.
def saveAsCSV(baseFiName):
    recTypes = recsByType.keys()
    for recType in recTypes:
        fldNamesUsed = {}
        outFiName = baseFiName + "." + recType + ".csv"

        fldNames = recFldPositions[recType]
        outFldNames = []
        for fldName in fldNames:
            outFldNames.append(fldName)
            fldNamesUsed[fldName] = 1

        # Merge in additional field names if needed for merged record types.
        if mergeLast.has_key(recType):
            for mergeRecName in mergeLast[recType]:
                mergeFlds = recFldPositions[mergeRecName]
                for mergeFldName in mergeFlds:
                    if not fldNamesUsed.has_key(mergeFldName):
                        outFldNames.append(mergeFldName)
                        fldNamesUsed[mergeFldName] = 1

        fout = open(outFiName, "w")
        fout.write("time,id,")
        fout.write(",".join(outFldNames))
        fout.write("\n")
        recs = recsByType[recType]
        reckeys = recs.keys()
        reckeys.sort()
        for akey in reckeys:
            arec = recs[akey]
            recOut = []
            recOut.append(arec["time"])
            if arec.has_key("cid"):
                recOut.append(arec["cid"])
            else:
                recOut.append("")
            # Merged fields will already be in the target record.
            for fldName in outFldNames:
                if fldName == "time":
                    continue
                if arec.has_key(fldName):
                    recOut.append(str(arec[fldName]))
                else:
                    recOut.append("")
            fout.write(",".join(recOut))
            fout.write("\n")
        fout.close()

# TODO:
def toMongo(baseFiName):
    pass


# TODO: rough draft; writes one JSON object per line after a field name header.
def toElastic(baseFiName):
    recTypes = recsByType.keys()
    for recType in recTypes:
        outFiName = baseFiName + "." + recType + ".json"
        fldnames = recFldPositions[recType]
        fout = open(outFiName, "w")
        fout.write(",".join(fldnames))
        fout.write("\n")
        recs = recsByType[recType]
        reckeys = recs.keys()
        reckeys.sort()
        for akey in reckeys:
            arec = recs[akey]
            fout.write(json.dumps(arec))
            fout.write("\n")
        fout.close()

# Instructs the parser to merge the last SENS, NIGHTE, and DAYE records
# into each STAT record it finds.
mergeLast["STAT"] = ["SENS", "NIGHTE", "DAYE"]

localTimeAdj = timedelta(hours=-7.0)  # FOR PACIFIC TIME: hours to add to current time for GMT adjustment

logFiles = glob.glob("c:\\a2wh\\plant-unit-01\\a2wh*.DLOG.TXT")
for fiName in logFiles:
    parse(fiName)

saveAsCSV("c:\\a2wh\\plant-unit-01\\testx2")