Difference: DataAccessSchemeBeamtimeAprMay07 (1 vs. 11)

Revision 11
12 Apr 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Data Access Scheme for Beamtime Apr 2007

The following picture illustrates the data flow for the April 2007 beamtime.
Changed:
<
<
data flow for april 2007 beamtime
>
>
data flow for april 2007 beamtime
 

This scheme on the one hand provides access for all relevant data-analysis tasks on reasonable time scales and on the other hand protects the crucial event-building and archiving task by forbidding direct data access to the event builder.
Line: 108 to 110
 
META FILEATTACHMENT attr="" comment="PDF of data flow" date="1146756339" name="data-access.pdf" path="data-access.pdf" size="560939" user="PeterZumbruch" version="1.2"
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176131697" name="dataflow.gif" path="dataflow.gif" size="23666" user="SergeyYurevich" version="1.1"
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176132948" name="dataflow_small.gif" path="dataflow_small.gif" size="18474" user="SergeyYurevich" version="1.1"
Added:
>
>
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176383757" name="dataflow_small2.gif" path="dataflow_small2.gif" size="19859" user="SergeyYurevich" version="1.1"
 
META TOPICMOVED by="SergeyYurevich" date="1176202687" from="DaqSlowControl.DataAccessSchemeBeamtimeAprMay06" to="DaqSlowControl.DataAccessSchemeBeamtimeAprMay07"
Revision 10
11 Apr 2007 - Main.HadesDaq
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Line: 43 to 43
 
  • The QA File Server (lxg0447) continuously retrieves those small 2 MB files via connect_res to its local disk (/data.local2).
    • The process controlling the continuous retrieval runs on lxg0447 as user hades-qa and is described in the Howto Section under Start/Stop connect_res
    • This snapshot archive is exported as /misc/scratch2.lxg0447/qa/hld-snapshot-archive
Added:
>
>
    • A cron script, /u/hades-qa/cron/clean_datalocal2.pl, checks the snapshot archive twice a day and removes all files that are more than one day old if the total number of files exceeds 500 (a sketch of this cleanup logic follows the list below).
 

  • Those small files are read and analyzed by the Online Monitoring Software (Go4), and the QA plots are displayed on lxg0411.
    • The Go4 code uses a new HLD-Source of hydra looking for the latest files available (J.Markert)
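The Perl script itself is not reproduced here. The following Python sketch is purely illustrative and only shows the cleanup rule described above (delete files older than one day once the archive holds more than 500 files); the directory path and file-handling details are assumptions.

    # Illustrative sketch of the clean_datalocal2.pl rule (the real script is Perl).
    import os
    import time

    SNAPSHOT_DIR = "/data.local2"     # snapshot archive on lxg0447 (assumed path)
    MAX_FILES = 500                   # threshold quoted above
    MAX_AGE_SECONDS = 24 * 60 * 60    # "more than 1 day old"

    def clean_snapshot_archive():
        files = [os.path.join(SNAPSHOT_DIR, f) for f in os.listdir(SNAPSHOT_DIR)]
        files = [f for f in files if os.path.isfile(f)]
        if len(files) <= MAX_FILES:
            return                    # below the threshold: keep everything
        now = time.time()
        for path in files:
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                os.remove(path)       # drop snapshot files older than one day

    if __name__ == "__main__":
        clean_snapshot_archive()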
Revision 9
10 Apr 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Line: 34 to 34
 
  • The Event Builder writes the files to its local disks.
  • The archiver (HADES Archivist) periodically reads data from those data disks and archives them to the tape robot archives
  • This is the only process which is allowed to read the data disks of the eventbuilder directly.
Changed:
<
<

>
>
  • The archivist also periodically checks /data/hadeb04/apr07, imported from hadeb04. If the lxhadesdaq local disk (directory /data/lxhadesdaq/apr07) fails, the Event Builder should be restarted with the new output path "-o /data/hadeb04/apr07". Data archiving then continues from /data/hadeb04/apr07 (see the sketch below).
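A minimal sketch of this fallback, assuming only the "-o" option quoted above; all other daq_evtbuild options and the failure check are illustrative assumptions.

    # Pick the Event Builder output directory: use the lxhadesdaq disk if it is usable,
    # otherwise fall back to the directory imported from hadeb04 (watched by the Archivist).
    import os
    import subprocess

    PRIMARY = "/data/lxhadesdaq/apr07"
    FALLBACK = "/data/hadeb04/apr07"

    def output_dir():
        # assumption: a "failed" disk is approximated here by the directory not being writable
        return PRIMARY if os.access(PRIMARY, os.W_OK) else FALLBACK

    if __name__ == "__main__":
        # only "-o <path>" is taken from the text above; further daq_evtbuild options
        # would have to be supplied as in the normal start-up
        subprocess.run(["daq_evtbuild", "-o", output_dir()], check=True)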
 

Go4/QA online monitor: reduced but fast data flow for online monitoring

  • The Event Builder writes blocks of 1000 events into a small file in /ramdisk/res on a 12 GB RAM disk on the Event Server (hadeb06) and then skips 9000 events before writing the next file, yielding a reduced 1:10 data stream.
    • Those files have the same name as the large file written by the EventBuilder but extended by a running number:
Line: 107 to 107
 
META FILEATTACHMENT attr="" comment="PDF of data flow" date="1146756339" name="data-access.pdf" path="data-access.pdf" size="560939" user="PeterZumbruch" version="1.2"
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176131697" name="dataflow.gif" path="dataflow.gif" size="23666" user="SergeyYurevich" version="1.1"
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176132948" name="dataflow_small.gif" path="dataflow_small.gif" size="18474" user="SergeyYurevich" version="1.1"
Changed:
<
<
META TOPICMOVED by="PeterZumbruch" date="1145439973" from="Homepages.DataAccessSchemeBeamtimeMay06PeterZumbruch" to="DaqSlowControl.DataAccessSchemeBeamtimeAprMay06"
>
>
META TOPICMOVED by="SergeyYurevich" date="1176202687" from="DaqSlowControl.DataAccessSchemeBeamtimeAprMay06" to="DaqSlowControl.DataAccessSchemeBeamtimeAprMay07"
Revision 8
09 Apr 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Line: 21 to 21
 
    • /ramdisk/copy (for Online DST production and exclusive access)

The size and number of files on the Event Server RAM disk are controlled via
Changed:
<
<
special arguments of the Event Builder:
>
>
special arguments of daq_evtbuild:
 
  • --ressizelimit 80 (maximum number of files in /ramdisk/res)
Changed:
<
<
  • --secsizelimit 10000 (maximum size in MB of /ramdisk/copy)
>
>
  • --secsizelimit 10000 (maximum size in MB of files in /ramdisk/copy)
 

Therefore these two directories on the RAM disk have the following limitations:
Line: 91 to 91
 
  • RAM Disk QA - Replaced by connect_res script and QA File Server
  • Event Server installed - running (M.Traxler)
Deleted:
<
<

Person to contact

  • ME! Peter Zumbruch, 2885
  • Why? To centralize requests. That's what I am paid for.
 

Howto


Data Flow
Changed:
<
<
-- PeterZumbruch - 04 May 2006
>
>
-- PeterZumbruch - 04 May 2006, -- SergeyYurevich - 09 Apr 2007
 

Revision 7
09 Apr 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Data Access Scheme for Beamtime Apr 2007

Changed:
<
<
The following picture illustrates the data flow and access policies to the beamtime data.

Data Flow
>
>
The following picture illustrates the data flow for the April 2007 beamtime.
data flow for april 2007 beamtime
 

This scheme on the one hand provides access for all relevant data-analysis tasks on reasonable time scales and on the other hand protects the crucial event-building and archiving task by forbidding direct data access to the event builder.

ALERT! The data file size is reduced to 0.5 GByte per file for better file handling ALERT!

Data Pools

Changed:
<
<
The data flow from HADES to the event builder (EB) branches out to different data pools:
>
>
The daq_netmem process receives subevents from the readout and inserts them into the corresponding buffers. The event builder (daq_evtbuild) reads the buffers and builds an event from the subevents (a conceptual sketch follows the list below). The data flow then branches out into different data pools:
  • Local disks
  • Event Server (hadeb06) RAM disk. RAM disk has two directories:
    • /ramdisk/res (reduced hld files for Go4 online monitoring)
    • /ramdisk/copy (for Online DST production and exclusive access)
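A purely conceptual sketch of the subevent handling described above; this is not the real daq_netmem/daq_evtbuild code, and the source names and data layout are invented for illustration.

    # One buffer (queue) per data source: daq_netmem fills them, daq_evtbuild drains them.
    from collections import deque

    buffers = {"mdc": deque(), "rich": deque(), "tof": deque()}   # assumed source names

    def insert_subevent(source, subevent):
        # daq_netmem side: put a received subevent into the buffer of its source
        buffers[source].append(subevent)

    def build_event():
        # daq_evtbuild side: build one event once every buffer holds a subevent
        if all(buffers.values()):
            return {source: buf.popleft() for source, buf in buffers.items()}
        return None

    if __name__ == "__main__":
        for source in buffers:
            insert_subevent(source, {"source": source, "data": b"..."})
        print(build_event())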

The size and number of files on the Event Server RAM disk are controlled via special arguments of the Event Builder (what these limits mean in practice is sketched after the lists below):
  • --ressizelimit 80 (maximum number of files in /ramdisk/res)
  • --secsizelimit 10000 (maximum size in MB of /ramdisk/copy)

Therefore these two directories on the RAM disk have the following limitations:
  • /ramdisk/res = 2 GB maximum size
  • /ramdisk/copy = 10 GB maximum size
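The enforcement of these limits is done inside daq_evtbuild itself; the sketch below only illustrates what the two kinds of limits mean, under the assumption that the oldest files are dropped first (this is stated for /ramdisk/copy in the exclusive-access section below).

    # Two limit styles: a maximum number of files (--ressizelimit) and a maximum
    # total size in MB (--secsizelimit); oldest files are removed first (assumption).
    import os

    def files_oldest_first(directory):
        paths = [os.path.join(directory, f) for f in os.listdir(directory)]
        return sorted((p for p in paths if os.path.isfile(p)), key=os.path.getmtime)

    def prune_by_count(directory, max_files):
        files = files_oldest_first(directory)
        while len(files) > max_files:
            os.remove(files.pop(0))

    def prune_by_total_mb(directory, max_mb):
        files = files_oldest_first(directory)
        total_mb = sum(os.path.getsize(f) for f in files) / 1e6
        while files and total_mb > max_mb:
            oldest = files.pop(0)
            total_mb -= os.path.getsize(oldest) / 1e6
            os.remove(oldest)

    if __name__ == "__main__":
        prune_by_count("/ramdisk/res", 80)          # --ressizelimit 80
        prune_by_total_mb("/ramdisk/copy", 10000)   # --secsizelimit 10000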
 

TAPE-ARCHIVING: The protected data flow to the tape archive

Changed:
<
<
  • The eventbuilder writes the files to its local disks.
>
>
  • The Event Builder writes the files to its local disks.
 
  • The archiver (HADES Archivist) periodically reads data from those data disks and archives them to the tape robot archives
Changed:
<
<
  • This is the only process which is allowed to read the data disks of the eventbuilder directly.
>
>
  • This is the only process which is allowed to read the data disks of the eventbuilder directly.
 

Go4/QA online monitor: reduced but fast data flow for online monitoring

Changed:
<
<
  • The eventbuilder writes blocks of 1000 events into a small file on a 1 GB RAM Disk on the QA-RAM-Disk-PC and then skips 9000 events before writing the next file, yielding a reduced 1:10 data stream.
>
>
  • The Event Builder writes blocks of 1000 events into a small file in /ramdisk/res on a 12 GB RAM disk on the Event Server (hadeb06) and then skips 9000 events before writing the next file, yielding a reduced 1:10 data stream (illustrated by the sketch after this list).
 
    • Those files have the same name as the large file written by the EventBuilder but extended by a running number:
      • e.g. be06123040506.hld → be06123040506_X.hld
    • the run Id in each of those small files is identical to the run Id in the large file
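A small sketch of this 1:10 reduction and the file naming, using the example name above; whether the running number restarts for every large file is an assumption.

    # Out of every 10000 events, write the first 1000 to a small file in /ramdisk/res,
    # named after the current large file plus a running number; skip the other 9000.
    EVENTS_PER_SMALL_FILE = 1000
    EVENTS_PER_CYCLE = 10000          # 1000 written + 9000 skipped -> 1:10 stream

    def reduced_stream(events, large_file_name="be06123040506.hld"):
        # yield (small_file_name, event) for the events that go to /ramdisk/res
        base = large_file_name.rsplit(".hld", 1)[0]
        for i, event in enumerate(events):
            if i % EVENTS_PER_CYCLE < EVENTS_PER_SMALL_FILE:
                file_number = i // EVENTS_PER_CYCLE
                yield f"{base}_{file_number}.hld", event    # e.g. be06123040506_0.hld

    if __name__ == "__main__":
        # toy usage: 25000 dummy events end up in small files ..._0.hld, ..._1.hld, ..._2.hld
        print(sorted({name for name, _ in reduced_stream(range(25000))}))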
Changed:
<
<
  • The QA File Server (lxg0447) continuously retrieves those small 2 MB files via connect_res to its local disk.
>
>
  • The QA File Server (lxg0447) continuously retrieves those small 2 MB files via connect_res to its local disk (/data.local2).
 
    • The process controlling the continuous retrieval runs on lxg0447 as user hades-qa and is described in the Howto Section under Start/Stop connect_res
    • This snapshot archive is exported as /misc/scratch2.lxg0447/qa/hld-snapshot-archive
Changed:
<
<
  • Those small files are read by the Online Monitoring Software (Go4)
>
>
  • Those small files are read and analyzed by the Online Monitoring Software (Go4), and the QA plots are displayed on lxg0411.
 
    • The Go4 code uses a new HLD-Source of hydra looking for the latest files available (J.Markert)
  • For the Online Monitor, direct access to the RAM disk is still provided but will be switched off

exclusive access for specially authorized PCs:

Changed:
<
<
  • The eventbuilder writes the same data files that are written to its disk also to the RAM disk on the event server.
  • The event server provides a RAM disk of 12 GB, which then holds up to 22 files, i.e. about half a shift of data
>
>
  • The Event Builder writes the same data files that are written to its disk also to /ramdisk/copy (the RAM disk) on the Event Server.
  • /ramdisk/copy has a size limit of 10 GB (about 20 hld files), i.e. about half a shift of data
 
  • The oldest files are removed when a new file is written.
Changed:
<
<
  • In the upper counting house 5 PCs (2 GB RAM, 3 GHz HT, 2 x 160 GB internal hard disk, connected to GB-Ethernet via marked cables) are set up for the different detector groups:
>
>
  • In the upper counting house 5 PCs (2 GB RAM, 3 GHz HT, 2 x 160 GB internal hard disk, connected to Ethernet via marked cables) are set up for the different detector groups:
 
    • lxg0440 : RICH
    • lxg0441 : MDC
    • lxg0442 : START/VETO/TRIGGER
    • lxg0443 : TOF/TOFino
    • lxg0444 : Shower
Changed:
<
<
    • lxg0445 : online DST production (lower counting house)
    • lxg0446 : online DST production (lower counting house)
>
>
    • lxg0451 : online DST production (lower counting house)
    • lxg0452 : online DST production (lower counting house)
 
  • Only those 5 + 2 online PCs are allowed/able to access the data on the RAM disk of the Event Server via the script get_hld() provided by M.Traxler.
    • This script copies (avoiding NFS) the last closed file(s) from the RAM disk to the local disks of those PCs
  • The groups are themselves responsible for retrieving the data from the event server (a hedged sketch of such a retrieval is given below).
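The real get_hld script is not reproduced here. The sketch below only illustrates the idea taken from the text, i.e. pull the last closed hld file(s) from the Event Server RAM disk to the local disk without going through NFS; the use of ssh/scp, the local target directory and the "newest file is still open" heuristic are assumptions.

    # Copy the last closed hld file(s) from /ramdisk/copy on hadeb06 to a local directory.
    import subprocess

    EVENT_SERVER = "hadeb06"          # Event Server named in the text
    REMOTE_DIR = "/ramdisk/copy"      # exclusive-access RAM disk directory
    LOCAL_DIR = "/data.local"         # assumed local target directory

    def get_last_closed_files(count=1):
        # list remote hld files, newest first
        listing = subprocess.run(
            ["ssh", EVENT_SERVER, f"ls -t {REMOTE_DIR}/*.hld"],
            capture_output=True, text=True, check=True).stdout.split()
        # skip the newest file (it may still be open) and fetch the next `count` files
        for remote_path in listing[1:1 + count]:
            subprocess.run(["scp", f"{EVENT_SERVER}:{remote_path}", LOCAL_DIR], check=True)

    if __name__ == "__main__":
        get_last_closed_files(count=2)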
Line: 54 to 70
 
      • lxg0442: none
      • lxg0443: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0444: none
Changed:
<
<
      • lxg0445: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0446: scratch.local and scratch.local2 (visible to lxg04* and lxi* and webserver)
>
>
      • lxg0451: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0452: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
 
      • lxg0447: scratch.local and scratch.local2 (visible to lxg04* and lxi* and webserver)

Added:
>
>
Export of the local disks of the online DST PCs to a webserver is no longer granted.
 

DaqSniff-like connect-res

  • The program connect-res provides stream-like access to a stream-like source
  • see Howto section connect_res
Line: 81 to 99
 
Added:
>
>
Data Flow
  -- PeterZumbruch - 04 May 2006

Line: 89 to 109
 
META FILEATTACHMENT attr="" comment="Data Flow" date="1146756094" name="data-access.gif" path="data-access.gif" size="27019" user="PeterZumbruch" version="1.3"
META FILEATTACHMENT attr="" comment="PPT of data flow" date="1146756175" name="data-access.ppt" path="data-access.ppt" size="497152" user="PeterZumbruch" version="1.3"
META FILEATTACHMENT attr="" comment="PDF of data flow" date="1146756339" name="data-access.pdf" path="data-access.pdf" size="560939" user="PeterZumbruch" version="1.2"
Added:
>
>
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176131697" name="dataflow.gif" path="dataflow.gif" size="23666" user="SergeyYurevich" version="1.1"
META FILEATTACHMENT attr="" comment="data flow for april 2007 beamtime" date="1176132948" name="dataflow_small.gif" path="dataflow_small.gif" size="18474" user="SergeyYurevich" version="1.1"
 
META TOPICMOVED by="PeterZumbruch" date="1145439973" from="Homepages.DataAccessSchemeBeamtimeMay06PeterZumbruch" to="DaqSlowControl.DataAccessSchemeBeamtimeAprMay06"
Revision 6
04 Apr 2007 - Main.HadesDaq
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"

Changed:
<
<

Data Access Scheme for Beamtime Apr/May 2006

>
>

Data Access Scheme for Beamtime Apr 2007

  The following picture illustrates the data flow and access policies to the beamtime data.

Data Flow
Revision 5
04 Aug 2006 - Main.PeterZumbruch
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"
Changed:
<
<
Warning: Can't find topic Homepages.JavaScriptToggleImages
>
>

 

Data Access Scheme for Beamtime Apr/May 2006

The following picture illustrates the data flow and access policies to the beamtime data.
Revision 4
09 May 2006 - Main.PeterZumbruch
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"
Deleted:
<
<
<--
	By default, this topic can be changed/renamed by the original author, only.  

To protect this topic from being read by other users, put a '* ' in front of the following line. Set ALLOWTOPICVIEW = PeterZumbruch -->
  Warning: Can't find topic Homepages.JavaScriptToggleImages

Data Access Scheme for Beamtime Apr/May 2006

Line: 34 to 26
 
    • the run Id in each of those small files is identical to the run Id in the large file
  • The QA File Server (lxg0447) continuously retrieves those small 2 MB files via connect_res to its local disk.
    • The process controlling the continuous retrieval runs on lxg0447 as user hades-qa and is described in the Howto Section under Start/Stop connect_res
Changed:
<
<
    • This snapshot archive is exported as /misc/scratch.lxg0447/qa/hld-snapshot-archive
>
>
    • This snapshot archive is exported as /misc/scratch2.lxg0447/qa/hld-snapshot-archive
 

  • Those small files are read by the Online Monitoring Software (Go4)
    • The Go4 code uses a new HLD-Source of hydra looking for the latest files available (J.Markert)
Revision 3
04 May 2006 - Main.PeterZumbruch
Line: 1 to 1
 
META TOPICPARENT name="HadesDaqDocumentation"
Warning: Can't find topic Homepages.JavaScriptToggleImages
Changed:
<
<
>
>
 

Data Access Scheme for Beamtime Apr/May 2006

The following picture illustrates the data flow and access policies to the beamtime data.
Line: 19 to 19
 

ALERT! The data file size is reduced to 0.5 GByte per file for better file handling ALERT!

Changed:
<
<
The data flow from HADES to the event builder (EB) branches out to three different data pools:
>
>

Data Pools

The data flow from HADES to the event builder (EB) branches out to different data pools:
 
Changed:
<
<
  • TAPE-ARCHIVING: The protected data flow to the tape archive
>
>

TAPE-ARCHIVING: The protected data flow to the tape archive

 
    • The eventbuilder writes the files to its local disks.
    • The archiver (HADES Archivist) periodically reads data from those data disks and archives them to the tape robot archives
    • This is the only process which is allowed to read the data disks of the eventbuilder directly.

Changed:
<
<
  • Go4/QA online monitor: reduced but fast data flow for online monitoring
>
>

Go4/QA online monitor: reduced but fast data flow for online monitoring

 
    • The eventbuilder writes blocks of 1000 events into a small file on a 1 GB RAM Disk on the QA-RAM-Disk-PC and then skips 9000 events before writing the next file, yielding a reduced 1:10 data stream.
Changed:
<
<
    • The RAM-disk can be seen from the Go4 monitor PC
>
>
    • Those files have the same name as the large file written by the EventBuilder but extended by a running number:
      • e.g. be06123040506.hld → be06123040506_X.hld
    • the run Id in each of those small files is identical to the run Id in the large file
  • The QA File Server (lxg0447) continuously retrieves those small 2 MB files via connect_res to its local disk.
    • The process controlling the continuous retrieval runs on lxg0447 as user hades-qa and is described in the Howto Section under Start/Stop connect_res
    • This snapshot archive is exported as /misc/scratch.lxg0447/qa/hld-snapshot-archive

  • Those small files are read by the Online Monitoring Software (Go4)
 
      • The Go4 code uses a new HLD-Source of hydra looking for the latest files available (J.Markert)
Changed:
<
<
    • In addition these files are copied on the QA-RAM-Disk-PC itself to its hard-disk as an archive of the snapshots
>
>
  • For the Online Monitor, direct access to the RAM disk is still provided but will be switched off
 

Changed:
<
<
  • exclusive access for specially authorized PCs:
>
>

exclusive access for specially authorized PCs:

 
    • The eventbuilder writes the same data files that are written to its disk also to the RAM disk on the event server.
    • The event server provides a RAM disk of 12 GB, which then holds up to 22 files, i.e. about half a shift of data
    • The oldest files are removed when a new file is written.
Line: 40 to 48
 
      • lxg0440 : RICH
      • lxg0441 : MDC
      • lxg0442 : START/VETO/TRIGGER
Changed:
<
<
      • lxg0443 : Shower
      • lxg0444 : TOF/TOFino
    • Only those 5 PCs are allowed/able to access the data on the RAM disk of the Event Server via the script getLastFiles() provided by M.Traxler.
>
>
    • lxg0443 : TOF/TOFino
    • lxg0444 : Shower
    • lxg0445 : online DST production (lower counting house)
    • lxg0446 : online DST production (lower counting house)
  • Only those 5 + 2 online PCs are allowed/able to access the data on the RAM disk of the Event Server via the script get_hld() provided by M.Traxler.
 
      • This script copies (avoiding NFS) the last closed file(s) from the RAM disk to the local disks of those PCs
    • The groups are themselves responsible for retrieving the data from the event server.
    • The local disks on those PCs can be made visible/exported to selected users/groups/PCs so that low-priority tasks/users can access the data.
Added:
>
>
    • up to now, the following are visible as /misc/scratch.lxg04xx or /misc/scratch2.lxg0XXX:
      • lxg0440: none
      • lxg0441: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0442: none
      • lxg0443: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0444: none
      • lxg0445: scratch.local and scratch.local2 (visible to lxg04* and lxi*)
      • lxg0446: scratch.local and scratch.local2 (visible to lxg04* and lxi* and webserver)
      • lxg0447: scratch.local and scratch.local2 (visible to lxg04* and lxi* and webserver)

DaqSniff-like connect-res

  • The program connect-res provides stream-like access to a stream-like source
  • see Howto section connect_res
 

ALERT! NOTE ALERT!
This scheme describes the access during run conditions. In case of a real emergency, access to the event builder can of course be granted, but these cases should remain exceptions.

Status

Changed:
<
<
  • The PC's are ready, but not yet connected, my (P.Zumbruch) task for the next days.
  • getLastFiles() Script ready (M.Traxler)
  • HADES Archivist to be setup for Apr06 (S.Lang)
  • RAM Disk (QA) - to do (P.Zumbruch, IT (K.Miers/C.Huhn))
  • Event Server, we are waiting for the hardware ... hardware (P.Z.), software (M.Traxler)
>
>
  • The PCs are ready
  • get_hld script ready (M.Traxler)
  • connect_res script ready (Radek)
  • HADES Archivist running (S.Lang)
  • RAM Disk QA - Replaced by connect_res script and QA File Server
  • Event Server installed - running (M.Traxler)
 

Person to contact

  • ME! Peter Zumbruch, 2885
  • Why? To centralize requests. That's what I am paid for.
Added:
>
>

Howto

 
Changed:
<
<
-- PeterZumbruch - 19 Apr 2006
>
>
-- PeterZumbruch - 04 May 2006
 

Changed:
<
<
META FILEATTACHMENT attr="" comment="Data Flow" date="1145436899" name="data-access.gif" path="data-access.gif" size="24534" user="PeterZumbruch" version="1.2"
META FILEATTACHMENT attr="" comment="PPT of data flow" date="1145437269" name="data-access.ppt" path="data-access.ppt" size="495104" user="PeterZumbruch" version="1.2"
META FILEATTACHMENT attr="" comment="PDF of data flow" date="1145437351" name="data-access.pdf" path="data-access.pdf" size="1679165" user="PeterZumbruch" version="1.1"
>
>
META FILEATTACHMENT attr="" comment="Data Flow" date="1146756094" name="data-access.gif" path="data-access.gif" size="27019" user="PeterZumbruch" version="1.3"
META FILEATTACHMENT attr="" comment="PPT of data flow" date="1146756175" name="data-access.ppt" path="data-access.ppt" size="497152" user="PeterZumbruch" version="1.3"
META FILEATTACHMENT attr="" comment="PDF of data flow" date="1146756339" name="data-access.pdf" path="data-access.pdf" size="560939" user="PeterZumbruch" version="1.2"
 
META TOPICMOVED by="PeterZumbruch" date="1145439973" from="Homepages.DataAccessSchemeBeamtimeMay06PeterZumbruch" to="DaqSlowControl.DataAccessSchemeBeamtimeAprMay06"
Revision 2
19 Apr 2006 - Main.PeterZumbruch
Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="BeamtimeMay06PeterZumbruch"
>
>
META TOPICPARENT name="HadesDaqDocumentation"