1. 25 May, 2019 3 commits
  2. 24 May, 2019 2 commits
  3. 23 May, 2019 11 commits
  4. 22 May, 2019 6 commits
  5. 21 May, 2019 6 commits
  6. 17 May, 2019 2 commits
  7. 16 May, 2019 2 commits
    • Fix archive loading events. · 0d0f9deb
      Marshall Garey authored
      There was a syntax error in the MySQL query that inserts the event
      records into the event table, introduced by commit 3d61b6aa. The
      syntax error was a semicolon in the middle of the query, for example:
      
      insert into "voyager_event_table" (time_start, time_end, node_name,
      cluster_nodes, reason, reason_uid, state, tres) values ('1538669453',
      '1539298628', 'v1', '', 'cold-start', '1017', '0',
      '1=8,2=4000,5=8,1001=4,1002=1');, (<... another record>);, ...
      
      Bug 7025.
    • Fix regression caused by 34e9d41b. · c77d7895
      Marshall Garey authored
      Commit 34e9d41b caused loading of usage table archive files to fail.
      Specifically, the archive files for the wckey and assoc
      hourly/daily/monthly usage tables and for the cluster usage tables
      would all fail to load.
      
      Bug 7025.
  8. 15 May, 2019 2 commits
  9. 13 May, 2019 1 commit
  10. 10 May, 2019 5 commits
    • Document behavior of duplicate archive file names. · 7e7fd1bc
      Marshall Garey authored
      Bug 6033.
    • Prevent infinite loop if 0 records are archived. · df5f748d
      Marshall Garey authored
      If _get_oldest_record() finds a record to archive/purge, then archive
      should always archive at least one record. If for whatever reason it
      fails to archive any records (_archive_table() returns 0), we don't
      want to call continue; we want to return an error. Calling continue
      and going back to the top of the while loop would result in an
      infinite loop.
      
      Bug 6033.
    • Make archive job sql query consistent with purge. · 90471db8
      Marshall Garey authored
      Bug 6033.
    • Only archive 50k records at a time. · ddd49896
      Marshall Garey authored
      Trying to archive too many records at once can result in archive files
      that are too big to read or even too big to be written. Archive only
      50k records at a time, just as we purge only 50k records at a time.
      
      Bug 6033.
    • Handle duplicate archive file names. · 1e234c3d
      Marshall Garey authored
      The time period of an archive file currently depends on the submit or
      start time and on whether the purge period is in hours, days, or
      months. Previously, if the archive file name already existed, we would
      overwrite the old archive file on the assumption that it held
      duplicate records being archived again after an archive load. However,
      that could result in lost records in a couple of ways:
      
        * If runaway jobs fell within an old archive file's time period and
        are later fixed and then purged, the old file would be overwritten.
        * If jobs or steps are purged while other jobs or steps in that time
        period are still pending or running, the pending or running jobs and
        steps won't be purged yet. When they finish and are purged, the old
        file would be overwritten.
      
      Instead of overwriting the old file, we append a number to the file name
      to create a new file. This will also be important in an upcoming commit.
      
      Bug 6033.