Eventbuilder on Dreamplug

Introduction

This page collects experiences with the HADAQ eventbuilder software installed on Dreamplug mini PCs.

System description

(To be completed: further Dreamplug hardware specifications, Linux OS details, etc.) Kernel versions of the two test systems:

hadaq@dreamplug-debian:~$ uname -a
Linux dreamplug-debian 2.6.33.6 #1 PREEMPT Tue Feb 8 03:18:41 EST 2011 armv5tel GNU/Linux

root@ee-dev007:~# uname -a
Linux ee-dev007 3.0.4 #1 PREEMPT Tue Aug 30 19:56:02 MDT 2011 armv5tel GNU/Linux

Installation at GSI

(To be described: how this is done with the lxhadesdaq file exports, the control system, etc.)

The installed eventbuilder software is mounted via lxhadesdaq:

lxhadesdaq.gsi.de:/var/diskless/dreamplug/hadaq/ 128561664 67476480  61085184  53% /home/hadaq/lxhadesdaq

The local executables in /home/hadaq/evtbuild/bin are just symbolic links into this partition.
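
A minimal sketch of this setup (export path and mount point are taken from the df output above; the NFS options and the link target are assumptions):

# mount the eventbuilder installation exported by lxhadesdaq (options assumed)
mount -t nfs -o ro,nolock lxhadesdaq.gsi.de:/var/diskless/dreamplug/hadaq /home/hadaq/lxhadesdaq
# example of one of the symbolic links in the local bin directory (hypothetical target path)
ln -s /home/hadaq/lxhadesdaq/evtbuild/bin/daq_evtbuild /home/hadaq/evtbuild/bin/daq_evtbuild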

The binaries/libraries installed at lxhadesdaq for the Dreamplug were compiled with the native gcc on the Dreamplug ee-dev007.

Installation at LIP/Coimbra

This is intended as an additional DAQ system for the LAND/R3B test beamtime at GSI.

Setup

On the Dreamplug the eventbuilder software is installed at /home/hadaq/evtbuild. The working directory is /home/hadaq/oper.

Start scripts in the oper directory:
  • startevtbuild.sh - for tests at ui1
  • startnetmem.sh
  • startevtbuild_lipc_file.sh - for tests at petdaq.lipc
  • startnetmem_lipc.sh

Tests and Benchmarks

The tests were performed using the daq_rdosoft/daq_memnet simulator, originally written by Sergei Yurevich. This software has been compiled and installed both on ui1.coimbra.lip.pt (100 Mbit Ethernet) and on petdaq.lipc (Gbit Ethernet).

Data sender on 100MbE

cat /sys/class/net/eth1/speed
100

The simulator software is installed on ui1.coimbra.lip.pt at /home/adamczew/hadaq/installation/oper. The start script is start_rdo_dreamplug.pl.

# data sender start_rdo_dreamplug.pl setup:
# 'C' => { 'ebnum' => '1',
#          'spill' => '20',
#          'rate'  => '',
#          'size'  => '',
#          'wmark' => '56000',
#          'queue' => '8000000'} );
# 5 datastreams/UDP ports, rate/size for each stream
# output to /mnt/sdc/data

| rate | size | Btotal (MByte/s) | Evts/s | evtbuild cpu | netmem cpu | file |
| 1000 | 1000 | 5.1 | 980 | 0.24 | 0.025 | te12157104211.hld |
| 2000 | 1000 | 9.8 | 1890 | 0.52 | 0.054 | te12157105011.hld |
| 5000 | 1000 | 12.0 | 2400 | 0.52 | 0.054 | te12157105427.hld |
| 10000 | 1000 | 12.0 | 2450 | 0.68 | 0.075 | te12157105751.hld |
| 20000 | 1000 | 12.0 | 2400 | 0.70 | 0.061 | te12157110156.hld |
| 1000 | 2000 | 10.0 | 1000 | 0.39 | 0.050 | te12157110540.hld |
| 1000 | 5000 | 12.0 | 490 | 0.37 | 0.064 | te12157110926.hld |
| 1000 | 10000 | 12.0 | 240 | 0.33 | 0.062 | te12157111441.hld |
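
As a cross-check (an interpretation of the numbers above, not from the original logs): with 5 datastreams the expected aggregate sender bandwidth is 5 x rate x size, and a 100 Mbit link tops out at about 12.5 MByte/s, which explains the plateau at 12 MByte/s:

# expected aggregate bandwidth, e.g. rate=2000 evts/s, size=1000 bytes, 5 streams:
echo $((5 * 2000 * 1000))   # 10000000 bytes/s, i.e. ~10 MByte/s (9.8 measured above)
# rate=5000 would require 25 MByte/s, well above the ~12.5 MByte/s 100 Mbit limit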

Conclusion: here the data rate is limited by the network, not by the disk speed. Additional tests on GbE are required.

Data sender on GbE

cat /sys/class/net/eth1/speed
1000

The simulator software is installed on petdaq.lipc at /home/pet/daqsoftware/debug/dreamplugtest/. Start scripts:
  • start_rdo_dreamplug.pl - as above, sends 5 datastreams
  • start_rdo_dreamplug_onesender.pl - sends on one datastream only (use case for the LAND beamtime?)

General observations
  1. If the memory used by daq_netmem or daq_evtbuild exceeds the limits, the kernel's "out of memory" killer will terminate one of these processes. This happens if the shared memory queue sizes are chosen too large (the Dreamplug has only 500 MByte of RAM), i.e. whenever a queue runs full at high data rates. Mostly this affected the daq_netmem process. Reducing the queue sizes may increase the number of discarded events/packets, but the eventbuilder is no longer terminated. (A monitoring sketch follows below this list.)
  2. The peak data rate shown in the eventbuilder terminal is about 1.5 times higher than the average recorded data rate derived from the file sizes. This is due to the spill mechanism of the frontend simulator.
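
A small monitoring sketch for observation 1, assuming the standard procps/util-linux tools are present on the Dreamplug's Debian (interval and output selection are arbitrary choices):

# watch memory consumption of both processes and recent OOM-killer activity
watch -n 5 '
  ps -C daq_evtbuild,daq_netmem -o pid,rss,pcpu,comm   # resident memory per process
  ipcs -m | tail -n +4                                 # shared memory segments (the queues)
  dmesg | grep -i "out of memory" | tail -n 2          # recent OOM-killer messages
'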

Tests with one datastream

First shell: ../evtbuild/bin/daq_evtbuild -q 32 -m 1 -x te -d file -o /mnt/disk/data -S 1

(or -d null for tests without file)

Second shell:

../evtbuild/bin/daq_netmem -q 32 -m 1 -i 10101 -S 1

On petdaq:

/home/pet/daqsoftware/debug/dreamplugtest/start_rdo_dreamplug_onesender.pl

(finish the test with killall daq_rdosoft; killall daq_memnet; and close the xterm with the "udp master")

The test will issue "spills" of 20 s length with maximum rates as described.
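
The two shells can also be combined into one wrapper script; a minimal sketch (options copied from above; the startup delay and the trap-based cleanup are assumptions, not part of the original scripts):

#!/bin/bash
# start the eventbuilder first, then the network receiver, as in the two shells above
../evtbuild/bin/daq_evtbuild -q 32 -m 1 -x te -d file -o /mnt/disk/data -S 1 &
EB_PID=$!
sleep 1   # give daq_evtbuild time to create the shared memory queues (assumption)
../evtbuild/bin/daq_netmem -q 32 -m 1 -i 10101 -S 1 &
NM_PID=$!
trap 'kill $EB_PID $NM_PID' INT TERM   # stop both on Ctrl-C or kill
wait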

The following values are evaluated as follows:
  • cpu and memory consumption of daq_evtbuild and daq_netmem, measured with top at the rate maximum (see the sampling sketch after this list)
  • shared memory queue fill percentage (queue), from the daq_netmem and daq_evtbuild terminal output
  • bandwidth and Evts/s, from the daq_evtbuild terminal output
  • received bandwidth (per channel) and discarded packets/messages, from the daq_netmem terminal output
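
The cpu/mem values were read off interactively; a non-interactive way to sample them at the rate maximum could look like this (batch-mode invocation of procps top; the grep pattern assumes the process names as shown above):

# one snapshot of cpu/mem for both daemons
top -b -n 1 | grep -E 'daq_(evtbuild|netmem)'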

(Columns: rate and size refer to the rdosoft sender; Bmax, Evts/s and the first cpu/mem/queue group to daq_evtbuild; Brcv, disc/received pckts and the second cpu/mem/queue group to daq_netmem. file = yes: data written to disk; no: -d null.)

| rate | size | Bmax (MByte/s) | Evts/s | cpu | mem | queue | Brcv (MByte/s) | disc/received pckts | cpu | mem | queue | comment | file |
| 5000 | 5000 | 18 | | 60% | 26% | 47% | 25 | 22/48 | 30% | 26% | 47% | | yes |
| 2000 | 10000 | 20 | 2000 | 50% | 3% | 0% | 20 | 0/39k | 23% | 3% | 1% | no data loss, stable operation | yes |
| 3000 | 10000 | 21 | 2000 | 56% | 25% | 35% | 29 | 7/17 | 32% | 25% | 63% | | yes |
| 5000 | 10000 | 18 | ? | 41% | 25% | ? | 35 | 86/37 | 41% | 25% | ? | | yes |
| 10000 | 1000 | 10 | ? | 85% | 11% | 63% | 10 | 0/14 | 11% | 11% | 61% | no data loss | yes |
| 10000 | 2000 | 18 | 6000 | 66% | 25% | 35% | 18 | 9/20 | 24% | 25% | 63% | | yes |
| 10000 | 5000 | 10 | 1500 | 48% | 25% | ? | 47 | ? | 41% | 25% | ? | | yes |
| 20000 | 500 | 7 | 10000 | 86% | 25% | 63% | 10 | 4/12 | 11% | 25% | 48% | | yes |
| 20000 | 1000 | 11 | ? | 62% | 26% | 63% | 21 | 75/70 | 25% | 26% | 35% | | yes |
| 20000 | 1000 | 10 | ? | ? | ? | ? | 20 | 80/90 | ? | ? | ? | limited by discarded msgs | no |
| 1000 | 20000 | 20 | 1000 | 38% | 1.6% | 0% | 20 | 0/24k | 23% | 1.6% | 0% | no data loss, stable operation | yes |
| 1000 | 30000 | 29 | 1000 | 47% | 25% | 63% | 30 | 3/32 | 31% | 25% | 35% | | yes |
| 1000 | 40000 | 20 | ? | 45% | 26% | 63% | 36 | 46k/42k | 39% | 26% | 35% | | yes |
| 1000 | 40000 | 40 | 1000 | 46% | 1.7% | 1% | 40 | 0 | 41% | 1.7% | 1% | file io seems to slow down, compare above | no |
| 1000 | 50000 | 35 | ? | 46% | 2% | 2% | 32 | 4/13 | 46% | 2% | 3% | wmark=200000 no effect, netmem discards msgs instead of pckts | no |

Tests with 5 datastreams

First shell: ../evtbuild/bin/daq_evtbuild -q 6 -m 5 -x te -d file -o /mnt/disk/data -S 1 (or -d null for tests without file)

Second shell: ../evtbuild/bin/daq_netmem -q 6 -m 5 -i 10101 -i 10102 -i 10103 -i 10104 -i 10105 -S 1

On petdaq:

/home/pet/daqsoftware/debug/dreamplugtest/start_rdo_dreamplug.pl

NOTE: The queue size was finally set to 6 MByte, mostly to avoid running out of memory, see above. The first measurements with -q 32 are indicated in the comment column.

(Columns as in the one-datastream table above.)

| rate | size | Bmax (MByte/s) | Evts/s | cpu | mem | queue | Brcv (MByte/s) | disc/received pckts | cpu | mem | queue | comment | file |
| 5000 | 500 | 17 | 5000 | 15% | 80% | ? | ? | ? | 21% | 80% | ? | -q 32 | yes |
| 10000 | 500 | 7 | 3000 | 50% | 50% | ? | 25 | ? | 30% | 41% | ? | -q 32, aborted by OOM killer | yes |
| 1000 | 5000 | 24 | 1000 | 15% | 80% | ? | ? | ? | 21% | 80% | ? | -q 32, stable long-time run, file average 16 MByte/s | yes |
| 1000 | 10000 | 14 | 1000 | 42% | 24% | 98% | 50 | 4/4 | 42% | 24% | 98% | -q 6, stable | yes |
| 1000 | 10000 | 30 | ? | 50% | 24% | 99% | 32 | 1/4 | 42% | 24% | 98% | -q 6, stable | no |

Tests with 2 datastreams

First shell: ../evtbuild/bin/daq_evtbuild -q 20 -m 2 -x te -d file -o /mnt/disk/data -S 1 (or -d null for tests without file)

Second shell: ../evtbuild/bin/daq_netmem -q 20 -m 2 -i 10101 -i 10102 -S 1

On petdaq:

/home/pet/daqsoftware/debug/dreamplugtest/start_rdo_dreamplug_twosenders.pl

NOTE: The queue size was finally set to 20 MByte, mostly to avoid running out of memory, see above.

(Columns as in the one-datastream table above.)

| rate | size | Bmax (MByte/s) | Evts/s | cpu | mem | queue | Brcv (MByte/s) | disc/received pckts | cpu | mem | queue | comment | file |
| 1000 | 20000 | 23 | ? | 48% | 26% | 99% | 40 | 10/8 | 37% | 26% | 80-99% | -q 16, average file rate 16 MByte/s | yes |
| 1000 | 20000 | | ? | | | | | | | | | -q 32, aborted! | yes |
| 1000 | 20000 | 30 | ? | 44% | 32% | 99% | 40 | ? | 36% | 32% | 99% | -q 20 | yes |
| 1000 | 20000 | 41 | ? | 51% | 2.7% | 5% | 40 | 0/16k | 40% | 2.7% | 4% | -q 20 | no |
| 1500 | 20000 | 21 | ? | 51% | 32% | 99% | 40 | 1.5k/16k | 50% | 32% | 99% | -q 20, many discarded events (30%) in evtbuild, in addition to discarded pckts | no |
| 1500 | 20000 | 10 | ? | ? | ? | ? | 40 | ? | ? | ? | ? | -q 20, many discarded events (30%) in evtbuild, in addition to discarded pckts | yes |

Conclusions
  • Maximum data rate with one datastream: 40 MByte/s (no file), and 20 MByte/s when writing to the USB disk (rate = 1 kHz, size = 40k). Clearly the file i/o slows down eventbuilding and leads to more data loss.
  • Maximum data rate with two datastreams: 41 MByte/s (no file), and 30 MByte/s when writing to the USB disk (rate = 1 kHz, size = 20k).
  • Maximum eventbuilder data rate with 5 datastreams: 30 MByte/s (no file), and 24 MByte/s when writing to the USB disk (rate = 1 kHz, size = 5k/10k). This is limited by the necessarily reduced queue size (6 MByte per message stream); otherwise the system will kill our process due to memory exhaustion (we could probably extend those limits, see the sketch after this list).
  • Low event rates with big event sizes tend to show less netmem loss than high event rates with small event sizes.
  • Maximum average recorded data rate to file: about 16 MByte/s (stable setup with no event loss).
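
On the question of extending those limits: a possible (untested) approach is to exempt the two eventbuilder processes from the OOM killer via the legacy oom_adj knob of these 2.6.x/3.0.x kernels. This is only a sketch under that assumption; the machine can still run out of memory, and the kernel would then kill some other process instead.

# set OOM_DISABLE (-17) for both eventbuilder processes; run as root
for p in $(pidof daq_evtbuild daq_netmem); do
    echo -17 > /proc/$p/oom_adj
done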


-- JoernAdamczewski - 10 Jul 2012