Difference: DaqUpgradeOverview (1 vs. 33)

Revision 33
31 Dec 2009 - Main.JanMichel
Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="WebHome"
>
>
META TOPICPARENT name="DaqUpgrade"
 

DAQ Upgrade

Line: 237 to 237
  As we need the same thing for the IPU-Data-Paths to the MU/ComputeNodes, we want to use the same link.

Please have a look at Ingo's documentation.
Changed:
<
<
http://hades-wiki.gsi.de/cgi-bin/view/DaqSlowControl/NewTriggerBus
>
>
http://hades-wiki.gsi.de/cgi-bin/view/DaqSlowControl/DaqNetwork
 

-- MichaelTraxler - 02 Mar 2006
Revision 32
27 Jan 2009 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 321 to 321
 

-- SergeyYurevich - 14 Jun 2007
Deleted:
<
<

Network

I would like to start a discussion on the structure of our private network. Our 10Gbit switch will be "divided" into two VLANs: the GSI VLAN and our private HADES VLAN. All the TRBs (plus 1Gbit switches if needed) will form the HADES VLAN. All lxg machines will stay in the GSI VLAN.

To configure the switch I propose the following:
  • HADES VLAN: 38 ports
    • 6 ports will be taken by the data links from TRBs (or switches) according to the Fig. below
    • 16 ports will be taken by four EBs: 4x4x1Gbit (each EB will have 4x1Gbit link aggregation)
    • 1 port for hadesdaq
    • 15 ports left for our other experimental needs
  • GSI VLAN: 10 ports
    • 1 port for hadesdaq (will be the only publicly accessible machine which will see HADES VLAN)
    • 3 downlinks to three 1Gbit netgear switches
    • 6 ports are left

The number of 1Gbit links (used for the data transport) connected to the 10Gbit switch is given in the figure below. The light blue rectangles define an intermediate layer, which can be optical hubs or switches.

Fig. Network structure:
network structure
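For bookkeeping, the proposed port allocation can be tallied in a few lines. A minimal Python sketch (the grouping simply mirrors the list above; no real switch configuration is read):

# Tally of the proposed 10Gbit-switch port allocation (illustrative only).
hades_vlan = {
    "TRB data links (or switches)": 6,
    "event builders (4 EBs x 4x1Gbit aggregation)": 4 * 4,
    "hadesdaq": 1,
    "spare for other experimental needs": 15,
}
gsi_vlan = {
    "hadesdaq (public side)": 1,
    "downlinks to 1Gbit Netgear switches": 3,
    "spare": 6,
}
assert sum(hades_vlan.values()) == 38   # HADES VLAN: 38 ports
assert sum(gsi_vlan.values()) == 10     # GSI VLAN: 10 ports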

-- SergeyYurevich - 14 Jan 2009
 

META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
Revision 31
26 Jan 2009 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Revision 30
14 Jan 2009 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 325 to 325
 

I would like to start a discussion on the structure of our private network. Our 10Gbit switch will be "divided" into two VLANs: the GSI VLAN and our private HADES VLAN. All the TRBs (plus 1Gbit switches if needed) will form the HADES VLAN. All lxg machines will stay in the GSI VLAN.
Added:
>
>
To configure the switch I propose the following:
  • HADES VLAN: 38 ports
    • 6 ports will be taken by the data links from TRBs (or switches) according to the Fig. below
    • 16 ports will be taken by four EBs: 4x4x1Gbit (each EB will have 4x1Gbit link aggregation)
    • 1 port for hadesdaq
    • 15 ports left for our other experimental needs
  • GSI VLAN: 10 ports
    • 1 port for hadesdaq (will be the only publicly accessible machine which will see HADES VLAN)
    • 3 downlinks to three 1Gbit netgear switches
    • 6 ports are left

The number of 1Gbit links (used for the data transport) connected to the 10Gbit switch is given in the figure below. The light blue rectangles define an intermediate layer, which can be optical hubs or switches.

Fig. Network structure:
network structure

-- SergeyYurevich - 14 Jan 2009
 
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
Line: 334 to 352
 
META FILEATTACHMENT attr="h" comment="Compute Node from Giessen" date="1141401411" name="CN_giessen1_500x500.jpg" path="CN_giessen1_500x500.jpg" size="94768" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="parallel Event Building scheme" date="1181828120" name="proposed_eb_small.jpg" path="proposed_eb_small.jpg" size="40566" user="SergeyYurevich" version="1.1"
META FILEATTACHMENT attr="h" comment="Upgrade architecture overview 800x600" date="1193419774" name="DAQ_Upgrade_2_800_600.png" path="DAQ_Upgrade_2_800_600.png" size="83999" user="MichaelTraxler" version="1.1"
Added:
>
>
META FILEATTACHMENT attr="" comment="network structure" date="1231969738" name="net.jpg" path="net.jpg" size="107704" user="SergeyYurevich" version="1.1"
META FILEATTACHMENT attr="" comment="network structure" date="1231970972" name="net_corr_cut.jpg" path="net_corr_cut.jpg" size="120328" user="SergeyYurevich" version="1.2"
META FILEATTACHMENT attr="" comment="network structure" date="1231972299" name="net_cut.jpg" path="net_cut.jpg" size="130051" user="SergeyYurevich" version="1.1"
Revision 29
14 Jan 2009 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 316 to 316
 
  • The estimated data rate in Au-Au collisions is about 200 MB/s. This has two impacts:
    • We have a 1 Gbit optical line which can transport only 100 MB/s to a data mover at the tape robot side. Thus we will need to install a 10 Gbit slot and a 10 Gbit fibre-optic cable.
    • At the moment there are only 4 tape drives (with a speed of 80 MB/s each), which means theoretically 320 MB/s in total. The IT department plans to buy two more tape drives, which will increase the total to 480 MB/s. This still means that for the beam time almost all robots will be required to serve Hades. The maximum number of tape drives foreseen in the GSI mass storage system is 8 to 10.
Changed:
<
<
  • Some info: GSI mass storage system will be upgraded with IBM TotalStorage 3494 Tape Library with a capacity of 1.6 PBytes.
>
>
  • Some info: GSI mass storage system will be upgraded with IBM TotalStorage 3494 Tape Library with a capacity of 1.6 PBytes.
 
  • Prices: 700 GB tape = 130 euro, tape drive = 20000 euro.

-- SergeyYurevich - 14 Jun 2007
Added:
>
>

Network

I would like to start a discussion on the structure of our private network. Our 10Gbit switch will be "divided" into two VLANs: the GSI VLAN and our private HADES VLAN. All the TRBs (plus 1Gbit switches if needed) will form the HADES VLAN. All lxg machines will stay in the GSI VLAN.
 
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
Revision 28
17 Dec 2008 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 6 to 6
 

This document gives an overview of why and how we want to upgrade our HADES-DAQ/Trigger-System.
Added:
>
>
The upgrade at a glance.

 

Motivation for an Upgrade

For light systems the DAQ shows approx. the following performance:
Revision 27
02 Oct 2008 - Main.MichaelBoehmer
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 285 to 285
 

Debug data block

More information can be found here: DaqUpgradeSubEventDebugBlock.
Added:
>
>

Advanced debug data block

More information can be found here: DaqUpgradeSubEventAdvancedDebugBlock.
  -- Michael Boehmer - 2007-03-20

Parallel Event Building

Revision 26
14 Aug 2008 - Main.IngoFroehlich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 282 to 282
  More information can be found here: RichFEEandDAQUpgrade.
Added:
>
>

Debug data block

More information can be found here: DaqUpgradeSubEventDebugBlock.
  -- Michael Boehmer - 2007-03-20

Parallel Event Building

Revision 25
06 Mar 2008 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 159 to 159
 

Compute Node

Wolfgang Kühn proposed to use a compute node, which will be developed in Gießen. It is a very versatile module.
Changed:
<
<
It consists of an array of FPGAs, with a set of IO capabilities. The following picture show a block-diagram of what is intened to be built in Giessen:
>
>
It consists of an array of FPGAs, with a set of IO capabilities. The following picture shows a block-diagram of what was intended to be built in Giessen:
 

Overview, Compute Node
Line: 170 to 170
 
  1. optical links using the Xilinx-proprietary MGTs
  2. memory
Added:
>
>
In February 2008 the first hardware became available. Pictures can be found here: Pictures of CN.
 

IPU/Trigger-Link

The only hard requirement we have to fulfill to include such a Compute-Node into the proposed Trigger-Scheme is the use of an optical GB-Transceiver at a transfer speed of
Revision 24
10 Dec 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 176 to 176
2 GBit. The industry standard is SFP: for example, for 2GBit: V23818_K305_B57 from Infineon (you can get these things from approx. 20 vendors). We now use the FTLF8519P2BNL (35€/piece). The link-layer protocol we want to use has some limits imposed by the SerDes-Chips on the TRB. The current choice is the TLK2501 from TI. It is a 1.5 to 2.5 Gbps transceiver and can be connected directly to the SFP-Transceiver. It uses an 8B/10B encoding to transmit data, but the user sees it just as a 16-bit FIFO, which means that we are limited to a wordlength which has to be a multiple of 16 bits (not really a limit).
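Since the TLK2501 is used as a plain 16-bit FIFO, any payload just has to be padded up to a whole number of 16-bit words before transmission. A minimal sketch of that constraint (the zero-padding is an illustrative choice, not part of the TrbNet definition):

# Pad a payload to a whole number of 16-bit words, as imposed by the
# TLK2501's 16-bit FIFO interface (illustrative only).
def pad_to_16bit_words(payload_bits: str) -> str:
    missing = (-len(payload_bits)) % 16
    return payload_bits + "0" * missing

assert len(pad_to_16bit_words("1" * 20)) == 32   # 20 bits -> two 16-bit words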
Changed:
<
<
The TrbNet is the protocol layer used (see below). This protocol (VHDL-code and support) is provided by Ingo with his students C- Schrader and J. Michel.
>
>
The TrbNet is the protocol layer used (see below). This protocol (VHDL-code and support) is provided by Ingo Fröhlich and Jan Michel.
 

TRB Architecture

Revision 23
26 Oct 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 82 to 82
 

Most important: The upgrade will be done in a way that the whole old system can stay as it is. There will be no time where we have to change electronics in all subsystems at once; we want to try to do it step by step.
Changed:
<
<
Overview
>
>
Overview
 

The key components are the following:
Line: 313 to 313
 
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141399272" name="DAQ_TRBV20_block_600x500.jpg" path="DAQ_TRBV20_block_600x500.jpg" size="30100" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="Compute Node from Giessen" date="1141401411" name="CN_giessen1_500x500.jpg" path="CN_giessen1_500x500.jpg" size="94768" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="parallel Event Building scheme" date="1181828120" name="proposed_eb_small.jpg" path="proposed_eb_small.jpg" size="40566" user="SergeyYurevich" version="1.1"
Added:
>
>
META FILEATTACHMENT attr="h" comment="Upgrade architecture overview 800x600" date="1193419774" name="DAQ_Upgrade_2_800_600.png" path="DAQ_Upgrade_2_800_600.png" size="83999" user="MichaelTraxler" version="1.1"
Revision 22
16 Oct 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 98 to 98
 

The full DAQ system will consist of
  • 24 TRBv2 for RPC (LVL2 trigger implemented on DSP on TRBv2)
Changed:
<
<
  • 24 TRBv2 for MDC
  • 6 TRBv2 for TOF
>
>
  • 12 TRBv2 for MDC
  • 12 TRBv2 for TOF
 
  • 6 or 12 RICH-TRBv2
  • 6 TRBv2 for Shower-Readout
  • 3 TRB for Forward-Wall
  • 4 TRB for Pion-Hodoscopes
Added:
>
>
Total: 73 TRBs + some new and up to now unknown detectors.
 

These boards will all be near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of a TriggerHub to work on sophisticated LVL2-trigger algorithms. If needed, the complete raw-data is delivered from the detector-TRBs to the compute-node.
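As a quick cross-check of the total quoted above, a one-line tally; taking 12 RICH-TRBv2 is an assumption here, since the list says "6 or 12":

# Cross-check of the quoted board count (assumes the larger option of 12 RICH-TRBv2).
trbs = {"RPC": 24, "MDC": 12, "TOF": 12, "RICH": 12,
        "Shower": 6, "Forward-Wall": 3, "Pion-Hodoscopes": 4}
assert sum(trbs.values()) == 73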
Revision 21
15 Jun 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 293 to 293
 
  • Common run ID with timestamp. Common run ID will be distributed to all Event Builders in every sub-event together with trigger tag.
  • Run information will be inserted into Oracle data-base by a script running on EB-1 machine. Start and stop times will be the times of the file from EB-1 and they will be identical for all other Event Builders. Run IDs will be unique for each file as well as the file names. Currently we are thinking about two types of run IDs: a) runID = commonRunID + EB_number, [this might introduce a dead time of 10 sec in case of 10 EBs], b) run ID consists of two numbers: commonRunID and EB_number.
Added:
>
>
Summary of the discussion with Mathias (15.06.2007):

  • Synchronization of EBs is not obvious. One more idea would be to synchronize via a wall clock. However, one still has to think about common run IDs.
  • The estimated data rate in Au-Au collisions is about 200 MB/s. This has two impacts:
    • We have a 1 Gbit optical line which can transport only 100 MB/s to a data mover at the tape robot side. Thus we will need to install a 10 Gbit slot and a 10 Gbit fibre-optic cable.
    • At the moment there are only 4 tape drives (with a speed of 80 MB/s each), which means theoretically 320 MB/s in total. The IT department plans to buy two more tape drives, which will increase the total to 480 MB/s. This still means that for the beam time almost all robots will be required to serve Hades. The maximum number of tape drives foreseen in the GSI mass storage system is 8 to 10.
  • Some info: GSI mass storage system will be upgraded with IBM TotalStorage 3494 Tape Library with a capacity of 1.6 PBytes.
  • Prices: 700 GB tape = 130 euro, tape drive = 20000 euro.

  -- SergeyYurevich - 14 Jun 2007

META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
Revision 20
14 Jun 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 291 to 291
 
  • Synchronous file writing. Synchronization is guaranteed by calibration events. The EB will stop writing current file and start new file when the following condition is fulfilled: currentEvent == calibrationEvent && number of collected events > some_number
  • a second way to ensure synchronization: The CTS (Central Trigger System) sends special triggers which will set some bit in the data-stream, so that all eventbuilders close their files at the same time. [we have to think about some way to resynchronize in case exactly this event is lost in one event-builder.]
  • Common run ID with timestamp. Common run ID will be distributed to all Event Builders in every sub-event together with trigger tag.
Changed:
<
<
  • Run information will be inserted into Oracle data-base by a script running on EB-1 machine. Start and stop times will be the times of the file from EB-1 and they will be identical for all other Event Builders. Run IDs will be unique for each file as well as the file names. Currently we are thinking about two types of run IDs: a) runID = commonRunID + EB_number, b) run ID consists of two numbers: common runID and EB_number.
>
>
  • Run information will be inserted into Oracle data-base by a script running on EB-1 machine. Start and stop times will be the times of the file from EB-1 and they will be identical for all other Event Builders. Run IDs will be unique for each file as well as the file names. Currently we are thinking about two types of run IDs: a) runID = commonRunID + EB_number, [this might introduce a dead time of 10 sec in case of 10 EBs], b) run ID consists of two numbers: commonRunID and EB_number.
 

-- SergeyYurevich - 14 Jun 2007
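To make the file-closing rule and run-ID variant (b) above concrete, here is a small sketch; EB_NUMBER and SOME_NUMBER are placeholders, not actual DAQ settings:

# Illustration of the synchronous file-closing condition and run-ID variant (b).
EB_NUMBER = 1          # placeholder: index of this event builder
SOME_NUMBER = 100000   # placeholder: minimum events per file

def start_new_file(current_is_calibration_event, events_in_file):
    # "currentEvent == calibrationEvent && number of collected events > some_number"
    return current_is_calibration_event and events_in_file > SOME_NUMBER

def run_id(common_run_id):
    # variant (b): the run ID is the pair (commonRunID, EB_number)
    return (common_run_id, EB_NUMBER)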
Revision 19
14 Jun 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 288 to 288
 

Summary of the discussion (13.06.2007):
Changed:
<
<
  • Synchronous file writing. Synchronization is guarantied by calibration events. The EB will stop writing current file and start new file when the following condition is fulfilled: currentEvent == calibrationEvent && number of collected events > some_number
>
>
  • Synchronous file writing. Synchronization is guaranteed by calibration events. The EB will stop writing current file and start new file when the following condition is fulfilled: currentEvent == calibrationEvent && number of collected events > some_number
  • a second way to ensure synchronization: The CTS (Central Trigger System) sends special triggers which will set some bit in the data-stream, so that all eventbuilders close their files at the same time. [we have to think about some way to resynchronize in case exactly this event is lost in one event-builder.]
 
  • Common run ID with timestamp. Common run ID will be distributed to all Event Builders in every sub-event together with trigger tag.
  • Run information will be inserted into Oracle data-base by a script running on EB-1 machine. Start and stop times will be the times of the file from EB-1 and they will be identical for all other Event Builders. Run IDs will be unique for each file as well as the file names. Currently we are thinking about two types of run IDs: a) runID = commonRunID + EB_number, b) run ID consists of two numbers: common runID and EB_number.
Revision 18
14 Jun 2007 - Main.SergeyYurevich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 281 to 281
 

-- Michael Boehmer - 2007-03-20
Added:
>
>

Parallel Event Building

  • parallel Event Building scheme:
    parallel Event Building scheme

Summary of the discussion (13.06.2007):

  • Synchronous file writing. Synchronization is guarantied by calibration events. The EB will stop writing current file and start new file when the following condition is fulfilled: currentEvent == calibrationEvent && number of collected events > some_number
  • Common run ID with timestamp. Common run ID will be distributed to all Event Builders in every sub-event together with trigger tag.
  • Run information will be inserted into Oracle data-base by a script running on EB-1 machine. Start and stop times will be the times of the file from EB-1 and they will be identical for all other Event Builders. Run IDs will be unique for each file as well as the file names. Currently we are thinking about two types of run IDs: a) runID = commonRunID + EB_number, b) run ID consists of two numbers: common runID and EB_number.

-- SergeyYurevich - 14 Jun 2007
 
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
Line: 288 to 301
 
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141399238" name="DAQ_TRBV10_block_600x500.jpg" path="DAQ_TRBV10_block_600x500.jpg" size="25821" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141399272" name="DAQ_TRBV20_block_600x500.jpg" path="DAQ_TRBV20_block_600x500.jpg" size="30100" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="Compute Node from Giessen" date="1141401411" name="CN_giessen1_500x500.jpg" path="CN_giessen1_500x500.jpg" size="94768" user="MichaelTraxler" version="1.1"
Added:
>
>
META FILEATTACHMENT attr="h" comment="parallel Event Building scheme" date="1181828120" name="proposed_eb_small.jpg" path="proposed_eb_small.jpg" size="40566" user="SergeyYurevich" version="1.1"
Revision 17
14 Jun 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 118 to 118
 
RPC 32 ((2 100 * 0.2 * 20 000 * 4) / 1 024) / 1 024 = 32
TOF 16 ((700 * .3 * 20 000 * 4) / 1 024) / 1 024 = 16
RICH 110 ((29 000 * 0.05 * 4 * 20 000 ) / 1 024) / 1 024 = 110
Changed:
<
<
Shower 20 ((30 * 30 * 3 * 0.2 * 2 * 20 000) / 1 024) / 1 024 = 20
>
>
Shower 62 ((30 * 30 * 3 * 0.1 * 2 * 20 000 * 6) / 1 024) / 1 024 = 62
 
Changed:
<
<
Sum: 312 MBytes/s in spill, 156 MBytes/s sustained. Distributed to 6 eventbuilder this amounts to 26 MBytes/s.
>
>
Sum: 134 + 32 + 16 + 110 + 62 = 354 MBytes/s in spill, 177 MBytes/s sustained. Distributed to 6 eventbuilders this amounts to 30 MBytes/s.
  For Ni+Ni we can expect around half of this data rate.

Comparison: PHENIX has been writing 350 MBytes/s sustained to tape since 2004. With their online compression they reach 600 MBytes/s.
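The in-spill numbers in the table all follow the same pattern (channels x occupancy x bytes per hit x 20 kHz LVL1 rate, converted to MBytes/s). A small sketch reproducing the arithmetic (formula factors copied from the table above; the table's individual entries are rounded, so the totals differ by a couple of MBytes/s):

# Recompute the in-spill data-rate estimates from the formulas in the table above.
rates_bytes_per_s = {
    "MDC":    37 * 8 * 2 * 20000 * 6 * 2,
    "RPC":    2100 * 0.2 * 20000 * 4,
    "TOF":    700 * 0.3 * 20000 * 4,
    "RICH":   29000 * 0.05 * 4 * 20000,
    "Shower": 30 * 30 * 3 * 0.1 * 2 * 20000 * 6,
}
for system, rate in rates_bytes_per_s.items():
    print(f"{system:7s} {rate / 1024 / 1024:6.1f} MBytes/s in spill")
total = sum(rates_bytes_per_s.values()) / 1024 / 1024
print(f"Sum: {total:.0f} in spill, {total / 2:.0f} sustained, "
      f"{total / 2 / 6:.0f} per event builder (6 EBs), all in MBytes/s")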
Revision 16
21 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 257 to 257
During DPG2007 in Giessen several people met and discussed another solution, based on an idea of Michael Böhmer, with important proposals from Michael Traxler and Ingo Fröhlich. We concluded before DPG2007 that man power for the HADES DAQ upgrade is too precious to be spent in too many different projects, so we should use common work wherever possible. Another conclusion was to get rid of proprietary bus systems in the new DAQ system, to allow easy maintenance and debugging / error reporting / error tracing during development and operation of the system. Moreover, the re-use of the work done by Ingo Fröhlich on the new trigger bus allows us to concentrate on a new frontend for the RICH, which should also be useful for other detectors (Si, MWPCs, CVDs) and / or experiments (FAIR).
Changed:
<
<
Please note: This design approach is the basic for discussion, and not yet settled!
>
>
Please note: This design approach is the basis for discussion, and not yet settled!
 

The new system will consist of several modules, as listed below:
Revision 15
20 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 274 to 274
 

For the RICH, we would come up with the following layout: one sector consists of 16 backplanes (with 16 logic modules), each one reading out four / five ADC cards connected to one analogue frontend card. All 16 logic modules connect to one TRB with addon board, concentrating data from one sector (ideal place for some IPU functions). Six TRBs are then sufficient for the whole RICH; one additional TRB with addon can be used to act as a central clock / trigger distribution board.
Deleted:
<
<

Analog Frontend (AFE)

 
Changed:
<
<
(pictures to be inserted soon)
>
>

RICH-FEE, APV25 and more

More information can be found here: RichFEEandDAQUpgrade.
 
Deleted:
<
<
This module connects directly to the RICH padplane by 64pin CSI connectors, replacing the old (mixed analog / digital) FEs. It carries only the analog parts, needed to readout the raw detector signals. Neither power consuming parts nor digital stuff will be placed there, which should allow us to re-use this board also for Si stripe and / or CVD detectors in vacuum.
 
Changed:
<
<
As preamp chip the APV25S1 (designed for CMS) will be used (editors note: in the data sheet replace "VDD" by "+2.50V", "GND" by "+1.25V" and "VSS" by "GND"):

  • 128 channels
  • 40MHz sampling rate
  • shaping time between 50ns and 300ns
  • completely digital control by I2C interface
  • analog "history" for 192 * 25ns
  • adjustable "latency" to look back into past
  • only a readout trigger needed, no real trigger

The AFE will offer 64 analog channels, each one protected by a double diode against discharges from the wire chamber. A small capacitor will AC couple the pad signal to the inputs of the APV25S1 (including a 10M resistor to discharge this coupling capacitor). The remaining 64 unconnected channels can be used for common noise rejection.

A massive GND connector (same style as the old FEs) is foreseen to reduce noise at this place.

The AFE card will provide a hex rotating switch (0x0...0xf) to allow setting the APV25S1 I2C address, allowing more than one AFE card to be connected to one I2C bus. Moreover, a small temperature sensor must be placed on the AFE, allowing online measurements of temperature in case of vacuum operation (either I2C based, or single wire protocol).

The connection to the ADC card will be a normal pinheader (20 pins, RM2.54mm) to allow a standard flat ribbon cable with twisted pairs to be used if the AFE card is to be used in vacuum. A short note: operation in vacuum has been tested once during a test beam time at the Garching Tandem accelerator.

Power supplies: we will place the LDOs for the APV chips on the passive backplane, based either on the 48V POLA concept, or on the +5V / +3.3V linear regulated power supplies already in use for the old system. This is to be discussed, as replacing all the power supplies will include quite some work (cabling, testing, ...)

Power requirements: the AFE board needs +1.25V @165mA and +2.50V @243mA (worst case numbers, taken from APV data sheet). In normal operation we need +1.25V @65mA and +2.50V @90mA.
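For a rough power budget, the supply currents above translate into roughly 0.8 W per AFE worst case and about 0.3 W in normal operation (simple P = V x I, converter efficiencies not included):

# Per-AFE power from the quoted supply currents (P = V * I).
worst_case_W = 1.25 * 0.165 + 2.50 * 0.243   # ~0.81 W
normal_W     = 1.25 * 0.065 + 2.50 * 0.090   # ~0.31 W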

Main questions still open:

  • availability of Known Good Dies (1000 pieces APV25S1)
  • bonding of APV (serial production, for prototyping solved)
  • protection of bonded APVs (glue blob?)
  • internal or external reference bias

Some of these questions will hopefully be answered in cooperation with the specialists from E18, which already used the APV25S1 for upgrading the COMPASS RICH.

ADC card (ADC)

(pictures to be inserted soon)

The next chain link is the ADC board. It connects on the detector side to the AFE board, and on the DAQ side to the passive backplane. This board contains mainly the ADC and all needed components for adjusting the APV differential output to the ADC, as well as decoupling components for the ADC. By separating the preamp part from the ADC part we hope to meet two main goals:

  • easy upgrades in case we need more ADC bits
  • better noise figures by keeping the high speed LVTTL outputs from the ADC away from analog parts

Our first choice in ADC is the Analog Devices AD9136 family, together with a differential opamp AD8138. These two components are already in use on the Bridgeboard prototype used for first APV readout together with CVD detectors. We propose to use 10bit 40MHz ADC, with possibility to upgrade to 12bit 80MHz should this become necessary. The role of the ADC will be discussed later, together with fundamental questions on how to control and trigger the APV25S1 chip in this system.

One technological limitation for our new FEs will be connector sizes. We now use ERNI 50pin finepitch connectors on the old backplanes, and it doesn't look reasonable to go to higher pin densities, as these connectors are fragile. Moreover, we need to send 12bit ADC data with LVTTL levels at 40MHz over these connectors.

From a first educated guess we suggest using a 40pin header on the ADC board to connect to the BP. One of these pins should be used as a simple "module connected" pin, to allow the LM to find out how many AFEs are present on one specific backplane. This should also avoid problems with misaligned connectors, which we know to happen quite often when backplanes are removed for service. We also recommend using the "look-through-hole" technique which was tested for the HERMES specific backplanes, allowing both optical inspection and easy access for mechanical guiding during assembly of the FE electronics.

Main questions still open:

  • do we need a DAC for adjusting the gain?
  • are 10bit sufficient?

passive backplanes (BP)

(pictures to be inserted soon)

The backplane (BP) board will take over several tasks in the readout system:

  • mechanical stability: keep the AFE/ADC combination in place
  • concentrate data from four / five AFEs to one logic module (LM)
  • distribute low voltage power to AFEs / ADCs
  • distribute trigger and clock to AFEs

We will need 16 backplane outlines for covering our "piece of cake" shaped sectors, with connectors on the detector side at the same position as in the old system. Therefore a simple board layout with few components is preferred, so we can reuse the old Eagle board files as starting points for board layout. By this we hope to minimize the chance of misplaced connectors and mechanical problems. With the RICH geometry in mind we have 14 different styles of BPs to route.

The BP will carry only one active component: LVDS driver chips to distribute differential Clock and Trigger signals to four / five AFEs. Regarding all other signals the BP is a pure passive component.

Main questions still open:

  • power supply concept: 48V POLA or 5V/3.3V from "old" FE power supplies
  • connectors towards the LM
  • decoupling / filtering issues

logic module (LM)

(pictures to be inserted soon)

The logic module integrates the interface to the common DAQ system. It processes data from four / five AFEs / ADCs connected to one backplane. Incoming triggers will be handled there, the ADC control will process data from the AFEs and generate a subevent for four / five AFEs.

The main idea is to use the serial link protocol under development by Ingo Fröhlich; by this we get rid of detector-specific readout controllers. Moreover, we can use a generalized slow control / error tracing solution, as we do not have any proprietary protocols in between.

From the current point of view, the following components should be integrated into the LM:

  • one big FPGA (either Lattice SC or Lattice ECP2M)
  • boot rom for FPGA, allowing "live at powerup" - no long bitstream download anymore by software
  • local clock (40MHz)
  • either optical or LVDS connection to TRB
  • LEDs (an important feature)
  • debug connectors

For RICH data processing at frontend level we will need some space, so we should go for one big FPGA with enough resources for future enhancements (the current XC4000E FPGAs in the RICH RCs are 95% full). I would recommend the Lattice SC FPGA, as it offers huge logic resources, as well as PLLs and DLLs - which will be crucial for adjusting the 40MHz clock for the ADC relative to the APV clock.

Inside the LM FPGA some preprocessing must be done, as we want to take three data samples per pad and trigger, to compensate baseline shifts and to distinguish real photon hits from noise. Another point for the Lattice SC is the autoboot from SPI FlashROMs, which are available from several large companies dirt cheap - no need anymore for overpriced special Xilinx boot PROMs. This can reduce costs.

The choice of connection to TRBs (optical / LVDS) has to be based on the expected data rates during experiment, as well as on the need of additional connections (like a central clock / trigger).

APV25S1 - basics of operation

The APV25S1 preamp ASIC was designed for the CMS experiment. It has been used mainly for silicon readout, but recently also to upgrade the COMPASS RICH from GASSIPLEX readout. From our first experiences it seems feasible to get the HADES RICH also working with the APV25S1, giving us more possibilities than the current GASSIPLEX based readout.

Internal structure

The APV25S1 has 128 analogue input channels, each one connected to a 192 cell analogue pipeline. This pipeline can be considered as a ring buffer, which is filled at a fixed sampling rate of 40MHz. By an external "trigger" (namely a readout trigger) one or more of this ring buffer cells can be "taken out" of the ring and be read out, while the rest of the ring buffer is being filled as normal.

This mode of operation can reduce the deadtime for readout significantly.

To point out clearly: in contrast to the GASSIPLEX (which uses a fixed shaping time of 550ns) the APV25 can "look back" into history up to 192 * 25ns = 4.8us, which makes the trigger timing somewhat easier to fulfill.
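A toy model of the analogue pipeline described above (sample values and the latency setting are illustrative; the real latency is a programmable APV25 register):

# Toy model of the APV25S1 192-cell analogue ring buffer (conceptual only).
PIPELINE_CELLS = 192      # 192 * 25 ns = 4.8 us of history at 40 MHz
latency = 100             # illustrative distance between write and read pointer

pipeline = [0.0] * PIPELINE_CELLS
write_ptr = 0

def sample(value):
    """Called once per 25 ns clock tick (40 MHz sampling)."""
    global write_ptr
    pipeline[write_ptr] = value
    write_ptr = (write_ptr + 1) % PIPELINE_CELLS

def readout_trigger():
    """Mark the cell 'latency' samples in the past for readout."""
    cell = (write_ptr - latency) % PIPELINE_CELLS
    return cell, pipeline[cell]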

Slow control

For configuration the APV25S1 has implemented an I2C slave interface. The I2C address of the APV25S1 can be set by five address inputs, while one of these addresses is a "general APV call" address where all APVs will respond.

Besides several registers controlling analogue functions (like bias currents and voltages) there are some registers defining the operation mode of the APV25 (Deconvolution, Peak, Multi), the polarity of input signals and the latency between the write pointer (i.e. the cell being written to) and the cell to be read out when being triggered.

Note: the APV25S1 will fail whenever a "repeated start" condition is present on the I2C bus. Only simple "start" and "stop" conditions are allowed. In case of repeated start conditions the complete I2C interface inside the APV will go weird, don't expect any response after this. Only a hard reset will get the APV25 back into communication.

Triggering

The APV25 is controlled in operation by a differential CLK (40MHz) and a differential TRG input. Commands are sent as a three bit sequence:

Sequence Command
000 No OPeration, NOP
100 normal trigger
110 calibration pulse
101 synchronization

These command sequences have to be sent synchronous to the 40MHz CLK signal. The behaviour of the APV25 in case of unknown command sequences is undefined.

The RICH trigger distribution will change from the current system (CTU -> DTUs -> VME backplane -> RCs -> Backplanes -> FEs).

For the HADES RICH upgrade I propose a central 40MHz clock being distributed from one source, together with the TRG lines. Distribution should be done by LVDS signals.
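The three-bit command sequences from the table above can be written down directly; how they are shifted out on the TRG line, synchronous to the 40 MHz clock, is left to the FPGA logic (encodings copied from the table, everything else illustrative):

# APV25 TRG-line command sequences, one bit per 25 ns clock cycle.
APV25_COMMANDS = {
    "nop":             (0, 0, 0),
    "trigger":         (1, 0, 0),
    "calibration":     (1, 1, 0),
    "synchronization": (1, 0, 1),
}

def command_bits(name):
    # Only these four sequences are ever sent; anything else leaves the
    # APV25 in an undefined state.
    return APV25_COMMANDS[name]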

Tickmarks and dataframes

The APV25 has only one differential output, which is used for both digital and analogue data. After configuration by I2C accesses the APV25 must be "started" by a synchronization trigger ("101"), which resets all internal pipelines and sets the latency between write and read pointers in the ring buffer. After synchronization the APV25 will send a "tickmark" every 35 clock cycles, which has a dual purpose: first, you can check whether all APVs were started synchronously (and whether they are responding); second, the tickmark defines the starting point of data frames.

It is necessary to synchronize the FPGA logic to these tickmarks, as data frames consist of both digital and analogue data bits, which must be interpreted correctly.

Digital bits are sent as a full-range swing of the differential output, with "1" and "0" differing in the sign of both lines. Analogue data has a smaller signal swing than digital bits.

If a readout trigger ("100") is sent to the APV, then one (or more) ring buffer cells are locked out and prepared for readout. After a maximum of three tickmarks (depending on the position of the readout trigger relative to the tickmarks) the APV will send a data frame.

A data frame consists of 140 bits of information, starting with three "1" bits as a header, followed by eight bits of row address (the cell being read out), one error bit (either readout FIFO overflow or LATENCY error), and 128 analogue bits.

To recognize this data frame one needs to sample the APV25 output lines with an ADC at 40MHz, phase shifted in a way to compensate cable delays.
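A sketch of how the 140-sample frame described above could be unpacked after digitisation (the threshold for separating digital bits from analogue levels is an assumption for illustration):

# Unpack one APV25 data frame: 3 header bits ("111"), 8-bit pipeline address,
# 1 error bit, 128 analogue values. DIGITAL_THRESHOLD is illustrative.
DIGITAL_THRESHOLD = 0.5

def decode_frame(samples):
    assert len(samples) == 140
    bits = [1 if s > DIGITAL_THRESHOLD else 0 for s in samples[:12]]
    header_ok = bits[0:3] == [1, 1, 1]
    address = int("".join(map(str, bits[3:11])), 2)  # pipeline cell that was read out
    error = bits[11]                                 # FIFO overflow or latency error
    analogue = samples[12:]                          # 128 pad amplitudes
    return header_ok, address, error, analogue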

-- MichaelBoehmer - 20 Mar 2007
>
>
-- Michael Boehmer - 2007-03-20
 

META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
Revision 14
20 Mar 2007 - Main.MichaelBoehmer
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 223 to 223
 
  1. maintainability/reliability: experience has shown that the current cable connections are not reliable; the cables are too big and heavy, very expensive, and not easy to extend.

So, we need a very fast (Trigger and Busy), reliable point-to-point connection. The best solution is an optical 2 GBit-Transceiver.
Changed:
<
<
As we need the same thing for the IPU-Data-Paths to the MU/ComputeNodes, we want to uses the same link.
>
>
As we need the same thing for the IPU-Data-Paths to the MU/ComputeNodes, we want to use the same link.
 

Please have a look at Ingo's documentation. http://hades-wiki.gsi.de/cgi-bin/view/DaqSlowControl/NewTriggerBus
Line: 266 to 266
 
  • passive backplanes (BP)
  • logic module (LM)
Changed:
<
<
The modules will be described in detail now.
>
>
The modules will be described in detail below.

General layout

As discussed in Giessen there will be an MDC addon for the TRB board, featuring 32 connections over the new trigger bus (either optical or copper), and one differential LVDS line for a trigger. If we modify this approach a bit we could come up with one addon board for RICH and MDC, reducing development time and costs. By using copper lines for the trigger bus (1 uplink, 1 downlink) and adding two general purpose differential lines (uplinks) per port, we could use a standard twisted pair cable (CAT7, up to 600MHz) for distributing both the trigger bus and a physical trigger signal to the FEs.

For the RICH, we would come up with the following layout: one sector consists of 16 backplanes (with 16 logic modules), each one reading out four / five ADC cards connected to one analogue frontend card. All 16 logic modules connect to one TRB with addon board, concentrating data from one sector (ideal place for some IPU functions). Six TRBs are then sufficient for the whole RICH; one additional TRB with addon can be used to act as a central clock / trigger distribution board.
 

Analog Frontend (AFE)

Line: 315 to 321
 
  • easy upgrades in case we need more ADC bits
  • better noise figures by keeping the high speed LVTTL outputs from the ADC away from analog parts
Changed:
<
<
Our first choice in ADC is the Analog Devices AD9136 family, together with a differential opamp AD8138. These two components are alreasy in use on the Bridgeboard prototype used for first APV readout together with CVD detectors. We propose to use 10bit 40MHz ADC, with possibility to upgrade to 12bit 80MHz should this become necessary. The role of the ADC will be discussed later, together with fundamental questions on how to control and trigger the APV25S1 chip in this system.
>
>
Our first choice in ADC is the Analog Devices AD9136 family, together with a differential opamp AD8138. These two components are already in use on the Bridgeboard prototype used for first APV readout together with CVD detectors. We propose to use 10bit 40MHz ADC, with possibility to upgrade to 12bit 80MHz should this become necessary. The role of the ADC will be discussed later, together with fundamental questions on how to control and trigger the APV25S1 chip in this system.
 

One technological limitation for our new FEs will be connector sizes. We use now ERNI 50pin finepitch connectors on the old backplanes, and it doesn't look reasonable to go to higher pin densities, as these connectors are fragile. Moreover, we need to send 12bit ADC data with LVTTL levels at 40MHz over this connectors.
Line: 367 to 373
 
  • LEDs (an important feature)
  • debug connectors
Changed:
<
<
For RICH data processing at frontend level we will need some space, so we better should go for one big FPGA with enough ressources for future enhancements (current XC4000E FPGAs in RICH RCs are 95% full). I would recommend the Lattice SC FPGA.... to be continued...
>
>
For RICH data processing at frontend level we will need some space, so we better should go for one big FPGA with enough ressources for future enhancements (current XC4000E FPGAs in RICH RCs are 95% full). I would recommend the Lattice SC FPGA, as it offers huge logic ressources, as well as PLLs and DLLs - which will be crucial for adjusting the 40MHz clock for the ADC relative to the APV clock.

Inside the LM FPGA some preprocessing must be done, as we want to take three data samples per pad and trigger, to compensate baseline shifts and to recognize real photon hits from noise. Another point for the Lattice SC is the autoboot by SPI FlashROMs, which are available from several large companies dirty cheap - no need anymore for overpriced special Xilinx boot PROMs. This can reduce costs.
 

The choice of connection to TRBs (optical / LVDS) has to be based on the expected data rates during experiment, as well as on the need of additional connections (like a central clock / trigger).
Added:
>
>

APV25S1 - basics of operation

The APV25S1 preamp ASIC was designed for the CMS experiment. It has been used mainly for silicon readout, but recently also to upgrade the COMPASS RICH from GASSIPLEX readout. From our first experiences it seems feasable to get the HADES RICH also working with the APV25S1, giving us more possibilities than the current GASSIPLEX based readout.

Internal structure

The APV25S1 has 128 analogue input channels, each one connected to a 192 cell analogue pipeline. This pipeline can be considered as a ring buffer, which is filled at a fixed sampling rate of 40MHz. By an external "trigger" (namely a readout trigger) one or more of this ring buffer cells can be "taken out" of the ring and be read out, while the rest of the ring buffer is being filled as normal.

This mode of operation can reduce the deadtime for readout significantly.

To point out clearly: in contrast to the GASSIPLEX (which uses a fixed shaping time of 550ns) the APV25 can "look back" into history up to 192 * 25ns = 4.8us, which makes the trigger timing somewhat easier to fulfill.

Slow control

For configuration the APV25S1 has implemented an I2C slave interface. The I2C address of the APV25S1 can be set by five address inputs, while one of these addresses is a "general APV call" address where all APVs will respond.

Besides several registers controlling analogue functions (like bias currents and voltages) there are some registers defining the operation mode of the APV25 (Deconvolution, Peak, Multi), the polarity of input signals and the latency between the write pointer (i.e. cell being written to) and the cell to be readout when being triggered.

Note: the APV25S1 will fail whenever a "repeated start" condition is present on the I2C bus. Only simple "start" and "stop" conditions are allowed. In case of repeated start conditions the complete I2C interface inside the APV will go weird, don't expect any response after this. Only a hard reset will get the APV25 back into communication.

Triggering

The APV25 is controlled in operation by a differential CLK (40MHz) and a differential TRG input. Commands are sent as a three bit sequence:

Sequence Command
000 No OPeration, NOP
100 normal trigger
110 calibration pulse
101 synchronization

These command sequences have to be sent synchronous to the 40MHz CLK signal. The behaviour of the APV25 in case of unknown command sequences is undefined.

The RICH trigger distribution will change from the current system (CTU -> DTUs -> VME backplane -> RCs -> Backplanes -> FEs).

For the HADES RICH upgrade I propose a central 40MHz clock being distributed from one source, together with the TRG lines. Distribution should be done by LVDS signals.

Tickmarks and dataframes

The APV25 has only one differential output, which is used for both digital and analogue data. After configuration by I2C accesses the APV25 must be "started" by a synchronization trigger ("101"), which resets all internal pipelines and sets the latency between write and read pointers in the ring buffer. After synchronization the APV25 will send a "tickmark" every 35 clock cycles, which have dual purpose: first, you can check if all APVs were started synchronously (and if the are responding), second the tickmark defines the starting point of data frames.

It is necessary to synchronize the FPGA logic to these tickmarks, as data frames consist of both digital and analogue data bits, which must be interpreted correctly.

Digital bits are sent as full range swing of the differential output, with "1" and "0" being different in the sign of both lines. Analogue data has a smaller signal swing as digital bits.

If a readout trigger ("100") is sent to the APV, then one (or more) ring buffer cells are locked out and prepared for readout. After a maximum of three tickmarks (depending on the position of the readout trigger relatively to the tickmarks) the APV will send a data frame.

A data frame consists of 140bit of information, starting with three "1" bits as header, followed by eight bits of row address (the cell being read out), one error bit (either readout FIFO overflow or LATENCY error), and 128 analogue bits.

To recognize this data frame one needs to sample the APV25 output lines with an ADC at 40MHz, phase shifted in a way to compensate cable delays.
  -- MichaelBoehmer - 20 Mar 2007

META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
Revision 13
20 Mar 2007 - Main.MichaelBoehmer
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 231 to 231
 

-- MichaelTraxler - 02 Mar 2006
Added:
>
>

RICH readout architecture

The motivation:

The current readout system based on GASSIPLEX chips is limited mainly by the readout controllers and the parallel TTL bus connection between the FEs and the RCs. Moreover, the current IPU hardware is also limited, especially in the case of direct particle hits in the MWPCs, and a higher number of noise pads in the RICH. The possibility to feed a new IPU with "grey scale" pictures instead of black-and-white ones as now would help in the case of direct particle hits, while a better noise reduction in the FEs would help in the latter case.

A possible solution

The first solution under discussion was to replace the RCs by TRBs with a RICH specific addon board, and to connect the 16 backplanes per sector by single LVDS links based on an ALTERA proprietary serial link protocol (which is available for free from ALTERA for educational applications).

There would have been several critical points in this design:

  • crossing the abyss between "technique of the last century" (i.e. 5V FEs) and a modern low voltage FPGA on the backplane
  • still having to design with schematic based tools for the FE FPGAs (XC4005E), based on a ViewLogic license
  • creating 16 different backplanes with fBGA packages
  • maintaining another proprietary protocol (including error reporting, slow control, ...)
  • still living with "old" GASSIPLEX preamps and their limitations
  • (to be continued)

For these reasons this design has been dropped.

A hopefully better solution (the DPG2007 ansatz)

During DPG2007 in Giessen several people did meet and discuss another solution, based on an idea of Michael Böhmer, with important proposals from Michael Traxler and Ingo Fröhlich. We concluded before DPG2007 that man power for the HADES DAQ upgrade is too precious to be spent in too many different projects, so we should use common work whereever possible. Another conclusion was to get rid of proprietary bus systems in the new DAQ system, to allow easy maintanance and debugging / error reporting / error tracing during development and operation of the system. Moreover, the re-usage of the work done by Ingo Fröhlich in the new trigger bus allows to concentrate on a new frontend for the RICH, which should also be useful for other detectors (Si, MWPCs, CVDs) and / or experiments (FAIR).

Please note: This design approach is the basic for discussion, and not yet settled!

The new system will consist of several modules, as listed below:

  • analog frontend card (AFE)
  • ADC card (ADC)
  • passive backplanes (BP)
  • logic module (LM)

The modules will be described in detail now.

Analog Frontend (AFE)

(pictures to be inserted soon)

This module connects directly to the RICH padplane by 64pin CSI connectors, replacing the old (mixed analog / digital) FEs. It carries only the analog parts, needed to readout the raw detector signals. Neither power consuming parts nor digital stuff will be placed there, which should allow us to re-use this board also for Si stripe and / or CVD detectors in vacuum.

As preamp chip the APV25S1 (designed for CMS) will be used (editors note: in the data sheet replace "VDD" by "+2.50V", "GND" by "+1.25V" and "VSS" by "GND"):

  • 128 channels
  • 40MHz sampling rate
  • shaping time between 50ns and 300ns
  • completely digital control by I2C interface
  • analog "history" for 192 * 25ns
  • adjustable "latency" to look back into past
  • only a readout trigger needed, no real trigger

The AFE will offer 64 analog channels, each one protected by a double diode against discharges from the wire chamber. A small capacitor will AC couple the pad signal to the inputs of the APV25S1 (including a 10M resistor to discharge this coupling capacitor). The remaining 64 unconnected channels can be used for common noise rejection.

A massive GND connector (same style as the old FEs) is foreseen to reduce noise at this place.

The AFE card will provide a hex rotating swich (0x0...0xf) to allow setting the APV25S1 I2C address, allowing more than one AFE card to be connected to one I2C bus. Moreover, a small temperatur sensor must be placed on the AFE, allowing online measurements of temperature in case of vacuum operation (either I2C based, or single wire protocol).

The connection to the ADC card will be a normal pinheader (20 pins, RM2.54mm) to allow a standard flat ribbon cable with twisted pairs to be used if the AFE card is to be used in vacuum. A short note: operation in vacuum has been tested once during a test beam time at the Garching Tandem accelerator.

Power supplies: we will place the LDOs for the APV chips on the passive backplane, based either on the 48V POLA concept, or on the +5V / +3.3V linear regulated power supplies already in use for the old system. This is to be discussed, as replacing all the power supplies will include quite some work (cabling, testing, ...)

Power requirements: the AFE board needs +1.25V @165mA and +2.50V @243mA (worst case numbers, taken from APV data sheet). In normal operation we need +1.25V @65mA and +2.50V @90mA.

Main questions still open:

  • availability of Known Good Dies (1000 pieces APV25S1)
  • bonding of APV (serial production, for prototyping solved)
  • protection of bonded APVs (glue blob?)
  • internal or external reference bias

Some of these questions will hopefully be answered in cooperation with the specialists from E18, which already used the APV25S1 for upgrading the COMPASS RICH.

ADC card (ADC)

(pictures to be inserted soon)

The next chain link is the ADC board. It connects on the detector side to the AFE board, and on the DAQ side to the passive backplane. This board contains mainly the ADC and all needed components for adjusting the APV differential output to the ADC, as well as decoupling components for the ADC. By separating the preamp part from the ADC part we hope to meet two main goals:

  • easy upgrades in case we need more ADC bits
  • better noise figures by keeping the high speed LVTTL outputs from the ADC away from analog parts

Our first choice in ADC is the Analog Devices AD9136 family, together with a differential opamp AD8138. These two components are alreasy in use on the Bridgeboard prototype used for first APV readout together with CVD detectors. We propose to use 10bit 40MHz ADC, with possibility to upgrade to 12bit 80MHz should this become necessary. The role of the ADC will be discussed later, together with fundamental questions on how to control and trigger the APV25S1 chip in this system.

One technological limitation for our new FEs will be connector sizes. We use now ERNI 50pin finepitch connectors on the old backplanes, and it doesn't look reasonable to go to higher pin densities, as these connectors are fragile. Moreover, we need to send 12bit ADC data with LVTTL levels at 40MHz over this connectors.

From a first educated guess we suggest using a 40pin header on the ADC board to connect to the BP. One of these pins should be used as a simple "module connected" pin, to allow the LM to findout how many AFEs are present on one specific backplane. This should also avoid problems with misaligned connectors, which we know to happen quite often when backplanes are removed for service. We also recommend to use the "look-through-hole" technique which was tested for the HERMES specific backplanes, allowing both optical inspection and an easy access for mechanical guiding during assembly of the FE electronics.

Main questions still open:

  • do we need a DAC for adjusting the gain?
  • are 10bit sufficient?

passive backplanes (BP)

(pictures to be inserted soon)

The backplane (BP) board will take over several tasks in the readout system:

  • mechanical stability: keep the AFE/ADC combination in place
  • concentrate data from four / five AFEs to one logic module (LM)
  • distribute low voltage power to AFEs / ADCs
  • distribute trigger and clock to AFEs

We will need 16 backplane outlines for covering our "piece of cake" shaped sectors, with connectors on the detector side at the same position as in the old system. Therefore a simple board layout with few components is prefered, so we can reuse the old Eagle board files as starting points for board layout. By this we hope to minimize chances for misplaces connectors and mechanical problems. With the RICH geometry in mind we have 14 different styles of BPs to route.

The BP will carry only one active component: LVDS driver chips to distribute differential Clock and Trigger signals to four / five AFEs. Regarding all other signals the BP is a pure passive component.

Main questions still open:

  • power supply concept: 48V POLA or 5V/3.3V from "old" FE power supplies
  • connectors towards the LM
  • decoupling / filtering issues

logic module (LM)

(pictures to be inserted soon)

The logic module integrates the interface to the common DAQ system. It processes data from four / five AFEs / ADCs connected to one backplane. Incoming triggers will be handled there, the ADC control will process data from the AFEs and generate a subevent for four / five AFEs.

Main idea is to use the serial link protocol under development by Ingo Fröhlich, by this we get rid of detector specific readout controllers; moreover we can use s generalized slow control / error tracing solution, as we do not have any proprietary protocols inbetween.

From the current point of view, the following components should be integrated into the LM:

  • one big FPGA (either Lattice SC or Lattice ECP2M)
  • boot rom for FPGA, allowing "live at powerup" - no long bitstream download anymore by software
  • local clock (40MHz)
  • either optical or LVDS connection to TRB
  • LEDs (an important feature)
  • debug connectors

For RICH data processing at frontend level we will need some space, so we better should go for one big FPGA with enough ressources for future enhancements (current XC4000E FPGAs in RICH RCs are 95% full). I would recommend the Lattice SC FPGA.... to be continued...

The choice of connection to TRBs (optical / LVDS) has to be based on the expected data rates during experiment, as well as on the need of additional connections (like a central clock / trigger).

-- MichaelBoehmer - 20 Mar 2007
 
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
Revision 12
19 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 114 to 114
  20kHz LVL1:

System Max. Data-Rate in MBytes/s in spill comment
Changed:
<
<
MDC 68 ((37 * 8 * 2 * 20 000 * 6) / 1 024) / 1 024 = 67.7
RPC 32 ((2 100 * 0.2 * 20 000 * 4) / 1 024) / 1 024 = 32.0
TOF 16 ((700 * .3 * 20 000 * 4) / 1 024) / 1 024 = 16.0217285
RICH 18 ((40 000 * 0.003 * 4 * 20 000 * 2) / 1 024) / 1 024 = 18
Shower 20 ((30 * 30 * 3 * 0.2 * 2 * 20 000) / 1 024) / 1 024 = 20.5
>
>
MDC 134 ((37 * 8 * 2 * 20 000 * 6 * 2) / 1 024) / 1 024 = 134
RPC 32 ((2 100 * 0.2 * 20 000 * 4) / 1 024) / 1 024 = 32
TOF 16 ((700 * .3 * 20 000 * 4) / 1 024) / 1 024 = 16
RICH 110 ((29 000 * 0.05 * 4 * 20 000 ) / 1 024) / 1 024 = 110
Shower 20 ((30 * 30 * 3 * 0.2 * 2 * 20 000) / 1 024) / 1 024 = 20

Sum: 312 MBytes/s in spill, 156 MBytes/s sustained. Distributed over 6 eventbuilders this amounts to 26 MBytes/s (sustained) per eventbuilder. For Ni+Ni we can expect around half of this data rate.
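The per-detector numbers above follow from an estimated event size (channels x occupancy x bytes per hit, as in the formulas of the table) multiplied by the 20 kHz LVL1 rate; the short Python sketch below just reproduces the arithmetic of the table and of this sum (the 50% spill duty cycle and the 6 eventbuilders are taken from the text above, the variable names are mine):

<verbatim>
# Recompute the Au+Au estimates above for 20 kHz accepted LVL1 triggers.
MIB = 1024 * 1024
LVL1 = 20000  # accepted LVL1 triggers per second

bytes_per_event = {
    # detector: data volume per event in bytes (factors as in the table above)
    "MDC":    37 * 8 * 2 * 6 * 2,
    "RPC":    2100 * 0.2 * 4,
    "TOF":    700 * 0.3 * 4,
    "RICH":   29000 * 0.05 * 4,
    "Shower": 30 * 30 * 3 * 0.2 * 2,
}

in_spill = {det: size * LVL1 / MIB for det, size in bytes_per_event.items()}
total = sum(in_spill.values())   # ~312 MBytes/s in spill
sustained = total / 2            # ~156 MBytes/s, assuming a 50% spill duty cycle
per_eb = sustained / 6           # ~26 MBytes/s per eventbuilder

for det, rate in in_spill.items():
    print("%-7s %6.1f MBytes/s in spill" % (det, rate))
print("sum %.0f in spill, %.0f sustained, %.0f per eventbuilder" % (total, sustained, per_eb))
</verbatim>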
 
Changed:
<
<
Sum: 154 MBytes/s in spill, 75 MBytes/s sustained. Distributed to 4 eventbuilder this amounts to 19 MBytes/s.
>
>
Comparison: PHENIX has been writing 350 MBytes/s sustained to tape since 2004. With their online compression they reach 600 MBytes/s.
 

Eventbuilding of large amounts of data is no problem at all; it is just a matter of the number of eventbuilders running in parallel. The big disadvantage: one needs more tapes for mass storage.
Added:
>
>
According to the GSI IT-department (March 2007): writing 100MBytes/s to the tape robot today is "no problem at all".
 
Changed:
<
<
For light systems without LVL2 trigger and 20kHz LVL1 trigger we get about 20 MBytes/s which is no problem at all.
>
>
For light systems without LVL2 trigger and a 20kHz LVL1 trigger we get about 20 MBytes/s, which is not a high demand.
 

Acromag module

Revision 11
05 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 113 to 113
  In the case we have no LVL2 trigger system running in the beginning of 2009 (heavy ion beam year!) we can expect the following data rates for Au+Au:
20kHz LVL1:
Changed:
<
<
System Max. Data-Rate in MBytes/s in spill
MDC 40
TOF 10
RICH 20
Shower 20
>
>
System Max. Data-Rate in MBytes/s in spill comment
MDC 68 ((37 * 8 * 2 * 20 000 * 6) / 1 024) / 1 024 = 67.7
RPC 32 ((2 100 * 0.2 * 20 000 * 4) / 1 024) / 1 024 = 32.0
TOF 16 ((700 * .3 * 20 000 * 4) / 1 024) / 1 024 = 16.0217285
RICH 18 ((40 000 * 0.003 * 4 * 20 000 * 2) / 1 024) / 1 024 = 18
Shower 20 ((30 * 30 * 3 * 0.2 * 2 * 20 000) / 1 024) / 1 024 = 20.5

Sum: 154 MBytes/s in spill, 75 MBytes/s sustained. Distributed to 4 eventbuilder this amounts to 19 MBytes/s.
 

Eventbuilding of large amounts of data is no problem at all, it is just a matter of number of eventbuilders put in parallel. The big disadvantage: One needs more tapes for mass storage.
Revision 10
05 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 108 to 108
  These boards will be all near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of a TriggerHub to work on sophisticated LVL2-trigger algorithms. If needed, the complete raw-data is delivered from the detector-TRBs to the compute-node.
Added:
>
>

Datarates

In the case we have no LVL2 trigger system running in the beginning of 2009 (heavy ion beam year!) we can expect the following data rates for Au+Au:
20kHz LVL1:

System Max. Data-Rate in MBytes/s in spill
MDC 40
TOF 10
RICH 20
Shower 20

Eventbuilding of large amounts of data is no problem at all, it is just a matter of number of eventbuilders put in parallel. The big disadvantage: One needs more tapes for mass storage.

For light systems without LVL2 trigger and 20kHz LVL1 trigger we get about 20 MBytes/s which is no problem at all.
 

Acromag module

The Acromag module is an add-on PMC card for VME-CPUs. It will serve as the root VME interface to the trbnet in the central trigger system. It could also serve as an intermediate solution to read out the CAEN TDCs, and, on a longer timescale, to read out standard VME modules (the SIS scaler, latches, whatever). The Acromag module will be connected with a 32-line LVDS cable to the hadcom module or the GP-Addon, and the trbnet protocol will be used on this cable. This was also one purpose of designing the trbnet: since it is independent of the medium, it will be reused for the optical link.
Revision 9
01 Mar 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 69 to 69
 
  1. Forward-Wall
  2. MDC readout and hit-finder (without the TDCs on the board)
  3. TOF system, if the FEE of TOF will be exchanged
Changed:
<
<
  1. RICH-FEE-readout and Readout-Controler-Functionality (München agreed on that)
>
>
  1. RICH-FEE-readout and Readout-Controller-Functionality (München agreed on that)
 

In the meantime also other experiments are interested in this concept:
  1. For the PANDA gas-chambers
Line: 94 to 94
 
  1. The Trigger-Hubs are just a piece of code in a hardware which is a general purpose compute-node.
Changed:
<
<

More detailed Architechture

>
>

More detailed Architecture

 

The full DAQ system will consist of
  • 24 TRBv2 for RPC (LVL2 trigger implemented on DSP on TRBv2)
Line: 103 to 103
 
  • 6 or 12 RICH-TRBv2
  • 6 TRBv2 for Shower-Readout
  • 3 TRB for Forward-Wall
Changed:
<
<
  • 4 TRB for Pion-Hodoscope
>
>
  • 4 TRB for Pion-Hodoscopes
 

These boards will be all near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of
Changed:
<
<
a TriggerHub to work on sophisticated LVL2-trigger algorithms.
>
>
a TriggerHub to work on sophisticated LVL2-trigger algorithms. If needed, the complete raw-data is delivered from the detector-TRBs to the compute-node.
 

Acromag module

Line: 126 to 126
  It is just PLD- and DSP-programming smile .
Some documentation about the MU V2 / concentrator is available here: http://www-linux.gsi.de/~michael/mu2_concept5.pdf
Added:
>
>
The second possibility is to use a VME-CPU with the Acromag-PCI-Card as the MU.
 
Changed:
<
<
The VHDL-code for the link-layer of the trigger-protocol will be written by Ingo.
>
>
The VHDL-code for the link-layer of the trigger-protocol is written by Ingo.
 

Compute Node

Line: 146 to 147
 

IPU/Trigger-Link

Changed:
<
<
The only hard requirement we have to include such a Compute-Node into the proposed Trigger-Scheme is the use of an optical GB-Transceiver at a transfer speed of
>
>
The only hard requirement we have to fulfill to include such a Compute-Node into the proposed Trigger-Scheme is the use of an optical GB-Transceiver at a transfer speed of
  2 GBit. The industry standard is SFP: for example, for 2 GBit the V23818_K305_B57 from Infineon (you can get these parts from approx. 20 vendors). We now use the FTLF8519P2BNL (35€/piece). The link-layer protocol we want to use has some limits imposed by the SerDes chips on the TRB. The current choice is the TLK2501 from TI. It is a 1.5 to 2.5 Gbps transceiver and can be connected directly to the SFP transceiver. It uses an 8B/10B encoding to transmit data, but the user sees it just as a 16-bit FIFO, which means that we are limited to word lengths that are a multiple of 16 bits (not really a limitation).
Added:
>
>
The TrbNet is the protocol layer used (see below). This protocol (VHDL code and support) is provided by Ingo together with his students C. Schrader and J. Michel.
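Since the TLK2501 presents itself as a 16-bit FIFO, whatever the link layer sends has to be an integer number of 16-bit words. A tiny Python sketch of this framing constraint (illustration only; this is not the TrbNet packet format, and the zero padding byte is an assumption):

<verbatim>
# Pad an arbitrary byte payload to a multiple of 16 bits, as required by the
# 16-bit FIFO interface of the TLK2501 SerDes (sketch only).
import struct

def to_16bit_words(payload):
    if len(payload) % 2:          # odd number of bytes: append one padding byte
        payload += b"\x00"
    return list(struct.unpack(">%dH" % (len(payload) // 2), payload))

words = to_16bit_words(b"\x01\x02\x03")   # -> [0x0102, 0x0300]
</verbatim>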
 

TRB Architechture

Revision 8
28 Feb 2007 - Main.IngoFroehlich
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 46 to 46
 

measure impact / aim status
VME-Linux CPUs LVL2 rate times 2-4 for MDC and Shower done for all subsystems except RICH (2006-03-03)
Changed:
<
<
TOF-readout and IPU LVL1 rate of TOF system higher than 30 kHz Ingo just started
>
>
TOF-readout and IPU LVL1 rate of TOF system higher than 30 kHz See below
 
MU + concentrator LVL1 rate higher than 30 kHz Marek can start in Summer 2006 on this, Hardware available
RICH FEE LVL1 rate with data higher than 30 kHz, low noise in FEE beg. 2007, Michael Böhmer, project till mid. 2008
MDC readout LVL1 rate > 30kHz + 5 MBytes/s / chamber started end of 2006, MDC-Addon produced, under test
Line: 108 to 108
  These boards will be all near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of a TriggerHub to work on sophisticated LVL2-trigger algorithms.
Added:
>
>

Acromag module

The acromag module is an add-on PMC card for VME-CPUs. It will serve as the root-VME interface to the trbnet in the central trigger system. It could serve as well as an intermediate solution to read out the CEAN TDCs, and to read out standard VME modules on a longer timescale (the SIS scaler, latches, whatever). The acromag module will be connected with a 32-line LVDS cable to the hadcom module or the GP-Addon, and the trbnet protocol will be used for this cable. This was also one purpose to design the trbnet, since it will be independent from the medium it will be reused for the optical link.

Status: VHDL code ready, simulated on the P2P connection (so no network/routing part up to now). First tests are ongoing with hardware (C. Schrader, J. Michel) in Frankfurt.
 

TRB

The TRB is developed and build in the frame of the EU-FP6-contract. This is the official task of Piotr Salabura. There is a large team working on it and we have a tight schedule to fulfill. Please have a look further down for the time-schedule.
Revision 7
27 Feb 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 86 to 86
 

The key components are the following:
Changed:
<
<
  1. In the end we will have only TRBv2 plattform with detector dependent Addon-boards for the readout of the FEE, LVL1 and LVL2 pipes. This is one main point we learned from the past times: The zoo of different hardware made the maintainance quite complicated and manpower-intensive.
>
>
  1. In the end we will only use TRBv2 platform modules with detector dependent Addon-boards for the readout of the FEE, LVL1 and LVL2 pipes. This is one main point we learned from the past times: The zoo of different hardware made the maintainance quite complicated and manpower-intensive.
 
  1. Data transport is done by Ethernet. No VME-System is needed anymore.
  2. TDC/ADC and readout moves to the FEE. No long cables from the detector to the TDC.
  3. We have a tree structure of the Trigger-System, only point to point links, realized with GB-optical links.
Line: 103 to 103
 
  • 6 or 12 RICH-TRBv2
  • 6 TRBv2 for Shower-Readout
  • 3 TRB for Forward-Wall
Changed:
<
<
  • 3 TRB for Pion-Hodoscope
>
>
  • 4 TRB for Pion-Hodoscope
 

These boards will be all near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of a TriggerHub to work on sophisticated LVL2-trigger algorithms.
Revision 6
26 Feb 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 48 to 48
 
VME-Linux CPUs LVL2 rate times 2-4 for MDC and Shower done for all subsystems except RICH (2006-03-03)
TOF-readout and IPU LVL1 rate of TOF system higher than 30 kHz Ingo just started
MU + concentrator LVL1 rate higher than 30 kHz Marek can start in Summer 2006 on this, Hardware available
Changed:
<
<
RICH readout and IPU LVL1 rate with data higher than 30 kHz no timeschedule, but approx. beg. 2007
MDC readout LVL1 rate > 30kHz + 5 MBytes/s / chamber no timeschedule, start approx. end of 2006
MDC cluster search (IPU) ability to combine this information with RICH-IPU-rings no timeschedule, start approx. end of 2006
>
>
RICH FEE LVL1 rate with data higher than 30 kHz, low noise in FEE beg. 2007, Michael Böhmer, project till mid. 2008
MDC readout LVL1 rate > 30kHz + 5 MBytes/s / chamber started end of 2006, MDC-Addon produced, under test
MDC cluster search (IPU) ability to combine this information with RICH-IPU-rings no timeschedule, could be started in March. 2007
 
Compute Node for IPUs "unlimited resources" for IPU algorithms no timeschedule
Additional, new projects
RPC readout and Trigger current LVL1 rate of TRB: 30kHz, LVL2 rate: 1kHz working on this project since a year, new version: beg. 2007
Changed:
<
<
Forward Wall approx. 300 channels, time and amplitude EU-FP6 project
Pion-Hodoscopes approx. 200 channels  
Trigger-Bus in combination with RPC-TRB is currently developped in parallel to TRB2.0
>
>
Forward Wall approx. 300 channels, time and amplitude EU-FP6 project, TRBv1, finished
Pion-Hodoscopes approx. 200 channels TRBv1, finished
Trigger-Bus in combination with RPC-TRB is currently developped in parallel to TRB2.0, TrbNet
 

Additional Projecs, LVL1 trigger
Changed:
<
<
LVL1 trigger enhancment + consolidation with VULOM module possibility for more debugging and better LVL1 trigger no manpower, no timeschedule
>
>
LVL1 trigger enhancment + consolidation with VULOM module possibility for more debugging and better LVL1 trigger M.Kajet. + Davide Leoni, finished in April 2007
 

Many new projects have been started which all have to the aim, to increase the capability of the accepted LVL1 rate, improve on the LVL2 transport bandwidth and increase the LVL2 trigger ratio.
Line: 86 to 86
 

The key components are the following:
Changed:
<
<
  1. In the end we will have only two or three different types of TRB boards for the readout of the FEE, LVL1 and LVL2 pipes. This is one main point we learned from the past times: The zoo of different hardware made the maintainance quite complicated and manpower-intensive.
>
>
  1. In the end we will have only TRBv2 plattform with detector dependent Addon-boards for the readout of the FEE, LVL1 and LVL2 pipes. This is one main point we learned from the past times: The zoo of different hardware made the maintainance quite complicated and manpower-intensive.
 
  1. Data transport is done by Ethernet. No VME-System is needed anymore.
  2. TDC/ADC and readout moves to the FEE. No long cables from the detector to the TDC.
  3. We have a tree structure of the Trigger-System, only point to point links, realized with GB-optical links.
Line: 94 to 94
 
  1. The Trigger-Hubs are just a piece of code in a hardware which is a general purpose compute-node.
Added:
>
>

More detailed Architechture

The full DAQ system will consist of
  • 24 TRBv2 for RPC (LVL2 trigger implemented on DSP on TRBv2)
  • 24 TRBv2 for MDC
  • 6 TRBv2 for TOF
  • 6 or 12 RICH-TRBv2
  • 6 TRBv2 for Shower-Readout
  • 3 TRB for Forward-Wall
  • 3 TRB for Pion-Hodoscope

These boards will be all near the detector and the full raw-data can be transported via the TrbNetwork. If wanted, one can use a Compute-Node instead of a TriggerHub to work on sophisticated LVL2-trigger algorithms.
 

TRB

The TRB is developed and build in the frame of the EU-FP6-contract. This is the official task of Piotr Salabura. There is a large team working on it and we have a tight schedule to fulfill. Please have a look further down for the time-schedule.
Revision 5
02 Feb 2007 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 72 to 69
 
  1. Forward-Wall
  2. MDC readout and hit-finder (without the TDCs on the board)
  3. TOF system, if the FEE of TOF will be exchanged
Changed:
<
<
  1. Mimosa readout for Frankfurt

To be discussed: Maybe even usable for RICH-FEE-readout and Readout-Controler-Functionality (A meeting for this can be held in week 11 2006).

>
>
  1. RICH-FEE-readout and Readout-Controler-Functionality (München agreed on that)
 
Added:
>
>
In the meantime also other experiments are interested in this concept:
  1. For the PANDA gas-chambers
  2. For the CBM-Mimosa readout for Frankfurt
  3. ....
 

Architecture of new DAQ/Trigger-System

Line: 132 to 127
 

The only hard requirement we have to include such a Compute-Node into the proposed Trigger-Scheme is the use of an optical GB-Transceiver at a transfer speed of 2 GBit. Industry standard is SFP: for example 2GBit: V23818_K305_B57 from Infineon (you can get these things from approx. 20 vendors).
Added:
>
>
We use now: FTLF8519P2BNL (35€/piece)
  The link-layer protocol we want to use has some limits imposed by the SerDes-Chips on the TRB. The current choice is the TLK2501 from TI. It is a 1.5 to 2.5 Gbps Transceiver and can directly be connected to the SFP-Transceiver. It uses a 8B/10B encoding to transmit data, but the user uses it just as a 16-bit FIFO, which means that we are limited to a wordlength which has to be a multiple of 16 bits (not really a limit).
Line: 161 to 157
 

Time Schedule for TRB 2.0

(estimated realtime time-schedules, not worktime)
Deleted:
<
<
  1. Marcin could started in Feb. with schematics changes
  2. All discussions about details and schematics should end in April (maybe has to be postponed as we extended the board by the optical link)
  3. Two month allocated for layout by Peter Skott at GSI
  4. 1. July: Production of PCB (FEM, Swizerland)
  5. 15. August: Production of boards (4 at one I would suggest) at GSI
  6. 15. September: Tests of functionality in Krakow
  7. 15. October: Main functionality is given, same as TRB V1.0
  8. 1. November: Implementation of TOF-algorithm in DSP and link to MU in Frankfurt by Ingo
 
Changed:
<
<
If this will be successful, the the "mass" production could start in the beginning February of 2007.
>
>
Due to many different delays, we are at least 6 months behind schedule (2007-02-02):

  1. ongoing tests with TRBv2a: DSP still under test.
  2. 15. February: submit the updated layout which will be called TRBv2b
  3. 15. March: tests with TRBv2b
  4. 15. April: Main functionality is given, same as TRB V1.0
  5. August: Finished implementation of TOF-algorithm in DSP and link to MU in Frankfurt by Ingo
 
Added:
>
>
If this is successful, the "mass" production could start in September 2007.
 

Trigger-Bus-Architechture / IPU-Data paths

Revision 4
20 Jun 2006 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Line: 53 to 53
 
MU + concentrator LVL1 rate higher than 30 kHz Marek can start in Summer 2006 on this, Hardware available
RICH readout and IPU LVL1 rate with data higher than 30 kHz no timeschedule, but approx. beg. 2007
MDC readout LVL1 rate > 30kHz + 5 MBytes/s / chamber no timeschedule, start approx. end of 2006
Added:
>
>
MDC cluster search (IPU) ability to combine this information with RICH-IPU-rings no timeschedule, start approx. end of 2006
 
Compute Node for IPUs "unlimited resources" for IPU algorithms no timeschedule
Additional, new projects
RPC readout and Trigger current LVL1 rate of TRB: 30kHz, LVL2 rate: 1kHz working on this project since a year, new version: beg. 2007
Line: 60 to 61
 
Pion-Hodoscopes approx. 200 channels  
Trigger-Bus in combination with RPC-TRB is currently developped in parallel to TRB2.0
Added:
>
>
Additional Projecs, LVL1 trigger
LVL1 trigger enhancment + consolidation with VULOM module possibility for more debugging and better LVL1 trigger no manpower, no timeschedule
  Many new projects have been started which all have to the aim, to increase the capability of the accepted LVL1 rate, improve on the LVL2 transport bandwidth and increase the LVL2 trigger ratio.

During the work on the RPC-TRB (TDC-Readout-Board), it turned out, that the TRB-concept can be used for many subsystems:
Revision 3
03 Mar 2006 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Changed:
<
<

DAQ Upgrade

>
>

DAQ Upgrade

 

This document gives an overview of why and how we want to upgrade our HADES-DAQ/Trigger-System.
Changed:
<
<

Motivation for an Upgrade

>
>

Motivation for an Upgrade

 

For light systems the DAQ shows approx. the following performance:
Line: 26 to 26
  The performance is not sufficient for larger systems. Therefore, we planned an upgrade of our DAQ-System and are currently working with the "EU-FP6 construction" to improve our system.
Changed:
<
<
The numbers show the following problems:
>
>
The numbers given in the tables show the following problems:
 
  1. The limitation of the LVL1 rate
  2. The main problem is the bad LVL2-Trigger reduction value
Line: 35 to 35
 
  1. Increase LVL1-rate capability
  2. Improve on the LVL2 trigger-algorithm
Changed:
<
<
None of these measures will be sufficient to improve significantly the statistics, but some things are easier to achieve than others.
>
>
None of these measures alone will be sufficient to improve significantly the statistics, but some things are easier to achieve than others.
 
Changed:
<
<

Upgrade Plans/Measures

>
>

Upgrade Plans/Measures

 

The following measures are planned/performed:
Line: 54 to 60
 
Pion-Hodoscopes approx. 200 channels  
Trigger-Bus in combination with RPC-TRB is currently developped in parallel to TRB2.0
Changed:
<
<
Many new projects, which all have to the aim, to increase the capability of the accepted LVL1 rate, improve on the LVL2 transport bandwidth and increase the LVL2 trigger ratio.
>
>
Many new projects have been started which all have to the aim, to increase the capability of the accepted LVL1 rate, improve on the LVL2 transport bandwidth and increase the LVL2 trigger ratio.
 

During the work on the RPC-TRB (TDC-Readout-Board), it turned out, that the TRB-concept can be used for many subsystems:
  1. RPC-Wall
Line: 66 to 72
 

To be discussed: Maybe even usable for RICH-FEE-readout and Readout-Controler-Functionality (A meeting for this can be held in week 11 2006).
Deleted:
<
<

Architechture of new DAQ/Trigger-System

 
Deleted:
<
<
Here I want to show the future architechture of the DAQ-System as discussed in our DAQ-Meeting 2006-02-28.
 
Deleted:
<
<
Most important: The upgrade will be done in a way, so that the whole old
 
Deleted:
<
<
Overview
 
Changed:
<
<

TRB Architechture

>
>

Architecture of new DAQ/Trigger-System

Here I want to show the future architecture of the DAQ-System as discussed in our DAQ-Meeting 2006-02-28.

Most important: the upgrade will be done in such a way that the whole old system can stay as it is. There will be no point in time where we have to change the electronics in all subsystems at once; we want to do it step by step.

Overview

The key components are the following:

  1. In the end we will have only two or three different types of TRB boards for the readout of the FEE, LVL1 and LVL2 pipes. This is one main point we learned from the past times: The zoo of different hardware made the maintainance quite complicated and manpower-intensive.
  2. Data transport is done by Ethernet. No VME-System is needed anymore.
  3. TDC/ADC and readout moves to the FEE. No long cables from the detector to the TDC.
  4. We have a tree structure of the Trigger-System, only point to point links, realized with GB-optical links.
  5. The same link will be used for IPU-data transport and so for the LVL2 trigger.
  6. The Trigger-Hubs are just a piece of code in a hardware which is a general purpose compute-node.

TRB

The TRB is developed and build in the frame of the EU-FP6-contract. This is the official task of Piotr Salabura. There is a large team working on it and we have a tight schedule to fulfill. Please have a look further down for the time-schedule. This also includes the integration of the optical GB-Transceiver to transport triggers and IPU-data to the Matching Unit (MU)

MU_V2 / MU Concentrator / Trigger-System

Marek Palka will start working on this issue beginning in Summer 2006. The hardware with the optical links and powerful DSPs has already been available for over a year. It is just PLD- and DSP-programming smile .
Some documentation about the MU V2 / concentrator is available here: http://www-linux.gsi.de/~michael/mu2_concept5.pdf

The VHDL-code for the link-layer of the trigger-protocol will be written by Ingo.

Compute Node

Wolfgang Kühn proposed to use a compute node, which will be developed in Gießen. It is a very versatile module. It consists of an array of FPGAs with a set of IO capabilities. The following picture shows a block diagram of what is intended to be built in Giessen:

Overview, Compute Node

  1. array of 4-16 FPGAs
  2. Gigabit-Ethernet connectors (how many?)
  3. parts of the FPGAs have a PowerPC-Processor included
  4. optical links using the Xilinx-proprietary MGTs
  5. memory

IPU/Trigger-Link

The only hard requirement we have to include such a Compute-Node into the proposed Trigger-Scheme is the use of an optical GB-Transceiver at a transfer speed of 2 GBit. Industry standard is SFP: for example 2GBit: V23818_K305_B57 from Infineon (you can get these things from approx. 20 vendors). The link-layer protocol we want to use has some limits imposed by the SerDes-Chips on the TRB. The current choice is the TLK2501 from TI. It is a 1.5 to 2.5 Gbps Transceiver and can directly be connected to the SFP-Transceiver. It uses a 8B/10B encoding to transmit data, but the user uses it just as a 16-bit FIFO, which means that we are limited to a wordlength which has to be a multiple of 16 bits (not really a limit).

TRB Architechture

 

Results from first experiments with the TRB can be found in the GSI Report 2005 or here: http://www-linux.gsi.de/~michael/GSI_2005_TRB.pdf

The TRB V1.0 only has a readout function, no IPU data-path. The block-diagram is shown here:
Changed:
<
<
Overview, TRBV1.0
>
>
Overview, TRBV1.0
 

As there is no IPU link, the next version of the TRB has the following block diagram.
Changed:
<
<
Overview, TRBV2.0
>
>
Overview, TRBV2.0
 
Added:
>
>

Main features of TRB V2.0

 
Changed:
<
<

Main features of TRB V2.0

>
>

Large FPGA

 
Changed:
<
<
Large FPGA
Due to the demand of many pins and the possibility to use the FPGA for the algorithm, we have chosen a Virtex 4 LX100 with 768 user I/Os, 4300*18kBit block RAM, 55k slices. Price: 430€.
>
>
Due to the demand of many pins and the possibility to use the FPGA for the algorithm, we have chosen a Virtex 4 LX40 with 768 user I/Os, 96*18kBit block RAM, 55k slices. Price: 430€.
 
Changed:
<
<
DSP
>
>

DSP

The DSP will give us the possibility of a very straightforward and fast port of the existing TOF-algorithm to the RPC. For other systems there is no need to put it on the board. The costs are around 150€/piece. We want to use the TigerSharc DSP.

Readout Processor

We want to use a faster readout processor: the ETRAX FS. Approx. 3 times faster than the Etrax MCM.
Changed:
<
<

Time Schedule for TRB 2.0

>
>

Time Schedule for TRB 2.0

  (estimated realtime time-schedules, not worktime)
Changed:
<
<
Marcin could started in Feb. with schematics changes All discussions about details and schematics should end in April (maybe has to be postponed as we extended the board by the optical link) Two month allocated for layout by Peter Skott at GSI 1. July: Production of PCB (FEM, Swizerland) 15. August: Production of boards (4 at one I would suggest) at GSI 15. September: Tests of functionality in Krakow 15. October: Main functionality is given, same as TRB V1.0 1. November: Implementation of TOF-algorithm in DSP and link to MU in Frankfurt by Ingo
>
>
  1. Marcin could started in Feb. with schematics changes
  2. All discussions about details and schematics should end in April (maybe has to be postponed as we extended the board by the optical link)
  3. Two month allocated for layout by Peter Skott at GSI
  4. 1. July: Production of PCB (FEM, Swizerland)
  5. 15. August: Production of boards (4 at one I would suggest) at GSI
  6. 15. September: Tests of functionality in Krakow
  7. 15. October: Main functionality is given, same as TRB V1.0
  8. 1. November: Implementation of TOF-algorithm in DSP and link to MU in Frankfurt by Ingo
  If this is successful, the "mass" production could start at the beginning of February 2007.
Changed:
<
<

Trigger-Bus-Architechture / IPU-Data paths

>
>

Trigger-Bus-Architechture / IPU-Data paths

 

The motivation:
Line: 130 to 188
 
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
Added:
>
>
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141398971" name="DAQ_Upgrade_Architecture_overview_800x600.jpg" path="DAQ_Upgrade_Architecture_overview_800x600.jpg" size="39375" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141399238" name="DAQ_TRBV10_block_600x500.jpg" path="DAQ_TRBV10_block_600x500.jpg" size="25821" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141399272" name="DAQ_TRBV20_block_600x500.jpg" path="DAQ_TRBV20_block_600x500.jpg" size="30100" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="Compute Node from Giessen" date="1141401411" name="CN_giessen1_500x500.jpg" path="CN_giessen1_500x500.jpg" size="94768" user="MichaelTraxler" version="1.1"
Revision 2
03 Mar 2006 - Main.MichaelTraxler
Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Changed:
<
<

DAQ Upgrade Plans

>
>

DAQ Upgrade

This document gives an overview of why and how we want to upgrade our HADES-DAQ/Trigger-System.

Motivation for an Upgrade

 

For light systems the DAQ shows approx. the following performance:
Line: 23 to 28
 

The numbers show the following problems:
  1. The limitation of the LVL1 rate
Changed:
<
<
  1. The main problem is the bad LVL2 reduction value
>
>
  1. The main problem is the bad LVL2-Trigger reduction value

The following things can be done to get higher statistics:
  1. Increase LVL2-rate capability
  2. Increase LVL1-rate capability
  3. Improve on the LVL2 trigger-algorithm

None of these measures will be sufficient to improve significantly the statistics, but some things are easier to achieve than others.

Upgrade Plans/Measures

The following measures are planned/performed:

measure impact / aim status
VME-Linux CPUs LVL2 rate times 2-4 for MDC and Shower done for all subsystems except RICH (2006-03-03)
TOF-readout and IPU LVL1 rate of TOF system higher than 30 kHz Ingo just started
MU + concentrator LVL1 rate higher than 30 kHz Marek can start in Summer 2006 on this, Hardware available
RICH readout and IPU LVL1 rate with data higher than 30 kHz no timeschedule, but approx. beg. 2007
MDC readout LVL1 rate > 30kHz + 5 MBytes/s / chamber no timeschedule, start approx. end of 2006
Compute Node for IPUs "unlimited resources" for IPU algorithms no timeschedule
Additional, new projects
RPC readout and Trigger current LVL1 rate of TRB: 30kHz, LVL2 rate: 1kHz working on this project since a year, new version: beg. 2007
Forward Wall approx. 300 channels, time and amplitude EU-FP6 project
Pion-Hodoscopes approx. 200 channels  
Trigger-Bus in combination with RPC-TRB is currently developped in parallel to TRB2.0

Many new projects, which all have to the aim, to increase the capability of the accepted LVL1 rate, improve on the LVL2 transport bandwidth and increase the LVL2 trigger ratio.

During the work on the RPC-TRB (TDC-Readout-Board), it turned out, that the TRB-concept can be used for many subsystems:
  1. RPC-Wall
  2. Pion-Hodoscopes
  3. Forward-Wall
  4. MDC readout and hit-finder (without the TDCs on the board)
  5. TOF system, if the FEE of TOF will be exchanged
  6. Mimosa readout for Frankfurt

To be discussed: Maybe even usable for RICH-FEE-readout and Readout-Controler-Functionality (A meeting for this can be held in week 11 2006).

Architechture of new DAQ/Trigger-System

Here I want to show the future architechture of the DAQ-System as discussed in our DAQ-Meeting 2006-02-28.

Most important: The upgrade will be done in a way, so that the whole old

Overview

TRB Architechture

Results from first experiments with the TRB can be found in the GSI Report 2005 or here: http://www-linux.gsi.de/~michael/GSI_2005_TRB.pdf

The TRB V1.0 only has a readout function, no IPU data-path. The block-diagram is shown here:

Overview, TRBV1.0

As there is no IPU link, the next version of the TRB has the following block diagram.

Overview, TRBV2.0

Main features of TRB V2.0

Large FPGA
Due to the demand of many pins and the possibility to use the FPGA for the algorithm, we have chosen a Virtex 4 LX100 with 768 user I/Os, 4300*18kBit block RAM, 55k slices. Price: 430€.

DSP
The DSP will give us the possibility to have a very straight-forward and fast port of the existing TOF-algorithm to the RPC. For other systems, there is no need to put it on the board. The costs are around 150€/piece. We want to use the TigerSharc DSP. Readout Processor We want to use a faster readout processor: ETRAX FS. Approx. 3 times faster than the Etrax MCM.

Time Schedule for TRB 2.0

(estimated realtime time-schedules, not worktime) Marcin could started in Feb. with schematics changes All discussions about details and schematics should end in April (maybe has to be postponed as we extended the board by the optical link) Two month allocated for layout by Peter Skott at GSI 1. July: Production of PCB (FEM, Swizerland) 15. August: Production of boards (4 at one I would suggest) at GSI 15. September: Tests of functionality in Krakow 15. October: Main functionality is given, same as TRB V1.0 1. November: Implementation of TOF-algorithm in DSP and link to MU in Frankfurt by Ingo If this will be successful, the the "mass" production could start in the beginning February of 2007.

Trigger-Bus-Architechture / IPU-Data paths

The motivation:

  1. scalability: the current trigger-bus is not scalable, as we can not connect many systems to it. Maximum distance approx. 50 m and ground-shift problems are topics.
  2. maintainability/reliability: experience has shown, that the current cable connections are not reliable, cables are too big and heavy, very expensive, not easy to extend.
 
Added:
>
>
So, we need a very fast (Trigger and Busy), reliable point-to-point connection. The best solution is an optical 2 GBit-Transceiver. As we need the same thing for the IPU-Data-Paths to the MU/ComputeNodes, we want to use the same link.
 
Changed:
<
<

Architechture

>
>
Please have a look to Ingos documentation. http://hades-wiki.gsi.de/cgi-bin/view/DaqSlowControl/NewTriggerBus
 
Deleted:
<
<
Here I want to
 

-- MichaelTraxler - 02 Mar 2006
Added:
>
>
META FILEATTACHMENT attr="h" comment="TRB V1.0 block diagram" date="1141391706" name="DAQ_TRBV10_block.jpg" path="DAQ_TRBV10_block.jpg" size="44987" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="TRB V2.0 block diagram" date="1141392434" name="DAQ_TRBV20_block.jpg" path="DAQ_TRBV20_block.jpg" size="52339" user="MichaelTraxler" version="1.1"
META FILEATTACHMENT attr="h" comment="DAQ Architecture" date="1141394896" name="DAQ_Upgrade_Architecture_overview.jpg" path="DAQ_Upgrade_Architecture_overview.jpg" size="77463" user="MichaelTraxler" version="1.1"
 