Id: 1
Corresponding Author: Attila RACZ
Experiment: CMS
Sub-system: DAQ
Topic: Detector Control And Real Time Systems
Trigger Throttling System for CMS DAQ
A. Racz / CERN-EP
Abstract:
This document is a first attempt to define the basic functionalities of the TTS in the CMS DAQ. Its role is to adapt the trigger pace to the DAQ capacity in order to avoid congestion and overflows at any stage of the readout chain. The different ways in which the TTS can measure the load on parts of the chain are examined. It clearly appears that one part of the chain needs a fast reaction time (a few tens of microseconds), whereas the rest of the chain can afford the longer reaction times achievable with present-day processors.
Summary:
The role of the CMS Trigger Throttling System has been described in global terms. The situation differs considerably between the stages of the data acquisition chain. Intrinsically, the most problematic parts are the front-end systems, where fully custom solutions must be developed. For the rest of the chain, standard and well-known solutions can be used.
After this first analysis, it has been shown that the TTS can be split logically and physically into two parts:
- a first one featuring a quick reaction time, implemented in custom hardware and located in the global trigger logic
- a second one with a slower reaction time, running on the BM processors
Finally, depending on the overflow recovery procedure, the DAQ availability can be reduced by an unacceptable factor.
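As a purely illustrative example of the slower, processor-based part of the TTS, the following C sketch shows how a throttle decision could be derived from buffer occupancy measurements. The number of monitored units, the thresholds and the state names are assumptions made for the sketch and do not represent the actual CMS design.

/*
 * Minimal sketch (not the actual CMS TTS logic): a slow throttle decision
 * as it could run on a builder-manager processor. The unit count, the
 * thresholds and the state names are illustrative assumptions.
 */
#include <stdio.h>

#define N_UNITS    8      /* hypothetical number of monitored readout buffers  */
#define WARN_LEVEL 0.75   /* occupancy above which the trigger pace is reduced */
#define STOP_LEVEL 0.95   /* occupancy above which triggers are inhibited      */

enum throttle_state { RUN, SLOW_DOWN, INHIBIT };

/* Derive the throttle state from the buffer occupancies (0.0 .. 1.0). */
static enum throttle_state throttle_decision(const double occupancy[N_UNITS])
{
    enum throttle_state state = RUN;
    for (int i = 0; i < N_UNITS; i++) {
        if (occupancy[i] > STOP_LEVEL)
            return INHIBIT;            /* any nearly full buffer stops triggers */
        if (occupancy[i] > WARN_LEVEL)
            state = SLOW_DOWN;         /* at least one buffer is filling up */
    }
    return state;
}

int main(void)
{
    double occ[N_UNITS] = { 0.10, 0.40, 0.80, 0.20, 0.05, 0.30, 0.15, 0.25 };
    printf("throttle state = %d\n", throttle_decision(occ));
    return 0;
}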
Id: 15
Corresponding Author: Piotr KULINICH
Experiment: General Interest
Sub-system: DAQ
Topic: Detector Control And Real Time Systems
Silicon DAQ based on FPDP and RACEway
PHOBOS collaboration
Abstract:
The DAQ for the Si detector of the PHOBOS setup at RHIC, with scalable power for readout and zero suppression, is described. Data from VA-HDR chips with an analog multiplexer are digitized by FADCs. The digital buffers are multiplexed by DMU modules at a speed of 100 MBytes/s and transmitted through FPDP and a virtual FPDP extender over fiber (FFI).
At the receiver end (in the counting house), the data from the fiber are distributed among a number of dedicated processors (in a RACEway multiprocessor frame) for zero suppression. After zero suppression, the data are concatenated and transmitted to the Event Builder.
Summary:
The readout and zero suppression of the Si detector of the PHOBOS experiment at RHIC are described.
Data from VA-HDR chips with an analog multiplexer are digitized by FADCs. The digital buffers are multiplexed by DMU modules at a speed of 100 MBytes/s and transmitted through FPDP and a virtual FPDP extender over fiber (FFI).
A Front Panel Data Port (FPDP, a 160 MBytes/s front-panel data port) is used as the interconnection interface. It is a de facto standard for high-data-rate readout, and on February 11, 1999, FPDP was approved as an American National Standards Institute standard, ANSI/VITA 17.
A number of firms now supply products conforming to this standard, and there are a few "extenders" that use Fibre Channel (ICS-7240; AAEC FFOIB), HIPPI-Serial or G-link interfaces for FPDP.
FPDP is a 32-bit synchronous data interface (with a clock of up to 40 MHz):
- It is relatively simple; no backplane is required.
- It can be "bussed" (up to 20-30 FPDP modules can be connected to an 80-wire cable).
- It does not require software control.
- It is quite natural for FIFO readout.
In the case of the PHOBOS DAQ, the use of FPDP is attractive because it keeps the building blocks relatively simple and independent of the fiber link. During testing, the FPDP cable can be connected, without the fiber link, directly to the RIN-T daughter card of the "Mercury" FPDP/RACEway interface.
Custom-designed FFIs (Fiber FPDP link Interfaces) are used as "virtual" extenders of the parallel FPDP bus. HP's G-link and the FTR-8510 optical module are the main components of the serial data transfer, and the control logic is implemented in two fast ispLSI-2128 devices. The FFI module is built on a VME-like board and uses only the "+5V" and "Ground" lines, so it can reside either in a standard VME crate or in a custom crate. In the counting house, RACEway access is needed, so the FFI is connected to the RIN-T and ROU-T boards in the MERCURY crate. In the front-end MDB crate, the FFI is connected to the MDC (control unit) and the DMUs.
At the receiver end (in the counting house), the data buffers from the fiber interface (FFI) are distributed among a number of dedicated processors (in a RACEway multiprocessor frame) for zero suppression. After zero suppression, the data are concatenated and transmitted to the Event Builder.
This approach makes it possible to scale the power/speed of the zero-suppression system by changing the number of fibers and/or the number of processors. It also allows the zero-suppression code to be written in C and different algorithms to be used for different parts of the detector, as sketched below.
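A minimal sketch of such a zero-suppression kernel, written in C, is given below. The pedestal handling, the threshold cut and the output packing are assumptions made for illustration; they do not reproduce the actual PHOBOS algorithms.

/*
 * Illustrative zero-suppression kernel of the kind that could run on the
 * RACEway processors: pedestal subtraction followed by a threshold cut.
 * Data layout and packing are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

/* Keep only channels whose pedestal-subtracted amplitude exceeds the
 * threshold; write packed (channel, amplitude) words to 'out'.
 * Returns the number of 32-bit words written. */
size_t zero_suppress(const uint16_t *adc, const uint16_t *pedestal,
                     size_t n_channels, uint16_t threshold, uint32_t *out)
{
    size_t n_out = 0;
    for (size_t ch = 0; ch < n_channels; ch++) {
        if (adc[ch] > pedestal[ch]) {
            uint16_t amp = (uint16_t)(adc[ch] - pedestal[ch]);
            if (amp > threshold)
                out[n_out++] = ((uint32_t)(ch & 0xFFFF) << 16) | amp;
        }
    }
    return n_out;
}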
Id: 54
Corresponding Author: Guido MAGAZZU
Experiment: CMS
Sub-system: Tracker
Topic: Detector Control And Real Time Systems
The Detector Control Unit: an ASIC for environmental monitoring in the CMS central tracker
Guido Magazzu' - INFN Sezione di Pisa
Alessandro Marchioro - CERN EP-MIC
Paulo Moreira - CERN EP-MIC
Abstract:
The readout system of the CMS central tracker performs several functions: readout of the data from the front-end ASICs, distribution of the timing and trigger signals, distribution and collection of the slow control and status information, and collection of local environmental parameters. The DCU (Detector Control Unit) is an integrated circuit which monitors parameters such as the leakage current in the silicon detectors, local voltages and temperatures. All these measurements can be performed by one analog multiplexer followed by an A/D converter interfacing to the slow control system. Such functions could easily be performed by a number of commercial devices, but the constraints of radiation tolerance, low power and maximum integration led us to design a special integrated circuit, which is described here.
Summary:
Silicon microstrip detectors, when exposed to the high levels of radiation at the LHC, are subject to a number of damaging phenomena demanding careful monitoring of their environmental conditions. To assure proper operation over the expected 10-year lifetime, one has to guarantee that the leakage currents in the microstrips do not exceed certain values and, to avoid reverse annealing phenomena, that the detector is kept at a conveniently low temperature during its whole lifetime. The vital quantities that need to be monitored close to the silicon strip detectors and to the front-end modules are therefore the leakage currents of the silicon detectors, in the range of 100 uA to 10 mA, and the temperature of the detectors themselves (which can fairly easily be sensed with appropriate thermistors), in the range -20 to +20 deg C with a precision of about one degree. Such quantities need to be read and logged at a relatively low frequency, so a fast conversion time is not important. The hardware necessary for monitoring these quantities also allows other environmental parameters to be monitored, for instance the local supply voltages, the temperature of the high-density hybrid housing the front-end integrated circuits, etc.
The Detector Control Unit (DCU) performs all these functions in one single integrated circuit. It basically consists of a 12-bit A/D converter with a single-slope architecture, preceded by an 8-input analog multiplexer. One input is reserved for an on-chip temperature sensor, which measures the temperature of the substrate onto which the chip is mounted, and seven other inputs are available to measure voltages in the range -1.0 to +1.0 V (almost rail to rail). The A/D conversion time is ~1 ms, and the analog reference for the A/D is provided by an on-chip bandgap reference block. As the external temperature sensors are essentially resistors and the input of the DCU reads voltages, a temperature-independent, stable current reference output is also made available from the chip. The DCU is interfaced to the tracker control system via a standard I2C port, through which the user can select one of the 8 multiplexer inputs, start a conversion in the A/D and read the conversion result.
The DCU ASIC is designed in a commercial quarter-micron technology using special layout techniques to enhance its radiation tolerance. The chip measures about 2.0 x 2.0 mm2, has 28 pins, and its power consumption is estimated to be less than 50 mW. The digital part of the chip uses triple redundancy and voting to ensure protection against SEU effects. To achieve almost rail-to-rail input compatibility, the analog circuitry uses a complementary solution based on dual NMOS and PMOS transistors and has an automatic offset cancellation feature. The circuit has been submitted for fabrication, and the measured results will be presented.
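To illustrate the select / start / read sequence seen by the user over I2C, a short C sketch is given below. The device address, the register layout and the i2c_write/i2c_read helpers are hypothetical and do not correspond to the real DCU register map.

/*
 * Conceptual sketch of the DCU access sequence over I2C: select one of the
 * 8 multiplexer inputs, start a conversion and read back the 12-bit result.
 * All addresses, registers and bit positions below are assumptions.
 */
#include <stdint.h>

/* Hypothetical low-level I2C helpers assumed to be provided by the
 * tracker control system software; they return 0 on success. */
extern int i2c_write(uint8_t dev, uint8_t reg, uint8_t value);
extern int i2c_read(uint8_t dev, uint8_t reg, uint8_t *value);

#define DCU_ADDR      0x40   /* assumed I2C device address     */
#define REG_CTRL      0x00   /* assumed control register       */
#define REG_DATA_LOW  0x01   /* assumed result register (LSB)  */
#define REG_DATA_HIGH 0x02   /* assumed result register (MSB)  */
#define CTRL_START    0x08   /* assumed "start conversion" bit */

/* Read one of the 8 DCU inputs; returns 0 and the 12-bit result on success. */
int dcu_read_channel(uint8_t channel, uint16_t *result)
{
    uint8_t lo, hi;

    /* Select the multiplexer input (0..7) and start a conversion. */
    if (i2c_write(DCU_ADDR, REG_CTRL, (uint8_t)((channel & 0x07) | CTRL_START)))
        return -1;

    /* The conversion takes about 1 ms; a real driver would poll a status
     * flag or wait here before reading the result registers. */
    if (i2c_read(DCU_ADDR, REG_DATA_LOW, &lo) ||
        i2c_read(DCU_ADDR, REG_DATA_HIGH, &hi))
        return -1;

    *result = (uint16_t)(((hi & 0x0F) << 8) | lo);   /* 12-bit conversion result */
    return 0;
}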
Id: 60
Corresponding Author: Kazuaki ANRAKU
Experiment: ATLAS
Sub-system: DAQ
Topic: Detector Control And Real Time Systems
Possibility of SR8000 Supercomputer for ATLAS DAQ Event Building and Event Filtering
ANRAKU, Kazuaki (ICEPP, Univ. of Tokyo), IMORI, Masatosi (ICEPP, Univ. of Tokyo)
Abstract:
We are investigating the possibility of adapting the SR8000 supercomputer system by Hitachi to ATLAS DAQ event building and event filtering. The SR8000 system is composed of a number (up to 128) of nodes, each of which has RISC microprocessors sharing a main memory, and of a high-speed "multi-dimensional" inter-node network. The maximum total processing power amounts to 1024 GFLOPS, and the bidirectional transfer rate of the inter-node network is 2 GByte/s. An arbitrary number of nodes can have I/O adapters for HIPPI, ATM, Ethernet, and Fast Ethernet. These features seem to be suitable for both the ATLAS DAQ event builder and the event filter.
Summary:
The "Supertechnical Server" SR8000 system by Hitachi, Ltd. is a parallel processing computer system comprised of a variable number (up to 128) of nodes, each of which has 64-bit RISC microprocessors sharing a main memory, and of high speed "multi-dimensional" inter-node network. Each node has a maximum processing power of 8 GFLOPS and a maximum main memory of 8 GB, resulting in the maximum total processing power amounts to 1024 GFLOPS. The nodes are connected to each other by three-dimensional "crossbar network" with an unidirectional transfer rate of 1 Gbyte/s and a bidirectional transfer rate of 2 Gbyte/s. A cooperative microprocessors architecture in each node and pseudo-vector processing in each processor, together with the high speed inter-node communication, realize the high performance.
The nodes are classified into three types: the supervisory node (SVN), which is unique in the system and controls the whole system; the I/O node (ION), which performs both processing and input/output operations; and the processing node (PRN), which performs processing only. Any node except the SVN can be configured as either an ION or a PRN. The IONs and the SVN are equipped with I/O adapters to connect to I/O devices and/or external industry-standard networks: HIPPI, ATM, Fast Ethernet, and Ethernet.
By feeding the detector data fragments from the ReadOut Crates (ROCs) via EventBuilder Interfaces (EBIFs) into the corresponding I/O nodes and exchanging data between the processing nodes, the SR8000 system could work well as an event builder in place of the switching network implemented in the DAQ/EF-1 prototype. In addition to event building, the maximum total processing power of a single SR8000 system is expected to satisfy the estimated minimum processing power required for event filtering, 10^6 MIPS.
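The C sketch below illustrates the event-building step itself: the fragments of one event, one per input source, are concatenated into a full event before being handed to the event filter. The data structures and the number of sources are assumptions made for illustration and do not represent the SR8000 programming model or the ATLAS dataflow software.

/*
 * Conceptual event-building sketch: concatenate the fragments of one event,
 * one fragment per input source, into a single contiguous event buffer.
 * The structures and the source count are illustrative assumptions.
 */
#include <stdint.h>
#include <string.h>

#define N_SOURCES 16   /* assumed number of ROC/EBIF inputs */

struct fragment {
    uint32_t event_id;       /* event number carried by the fragment */
    uint32_t size;           /* payload size in bytes                */
    const uint8_t *payload;  /* pointer to the fragment data         */
};

/* Build one event; returns the total event size in bytes,
 * or 0 if the fragments do not all carry the same event number. */
size_t build_event(const struct fragment frag[N_SOURCES], uint8_t *event_buf)
{
    size_t offset = 0;
    for (int s = 0; s < N_SOURCES; s++) {
        if (frag[s].event_id != frag[0].event_id)
            return 0;                                  /* fragment mismatch */
        memcpy(event_buf + offset, frag[s].payload, frag[s].size);
        offset += frag[s].size;
    }
    return offset;
}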
Considering the SR8000 system a promising candidate for the event builder and event filter, we are now investigating the possibility and feasibility of adapting it to the DAQ system.
Id: 89
Corresponding Author: Giacinto De CATALDO
Experiment: ALICE
Sub-system: DAQ
Topic: Detector Control And Real Time Systems
The Detector Control System for the HMPID in ALICE Experiment at LHC
G. De Cataldo for the ALICE collaboration,
INFN Bari, Italy
(email: giacinto.de.cataldo@cern.ch)
Abstract:
The Detector Control System (DCS) of ALICE at LHC will allow a hierarchical consolidation of the participating sub-detectors to obtain a fully integrated detector operation.
The High Momentum Particle Identification Detector (HMPID), based on a Ring Imaging Cherenkov detector, is one of the ALICE sub-detectors. Its DCS has to ensure the detector configuration, operation in standalone mode for maintenance, monitoring, control and integration into the ALICE DCS.
In this paper a status report on the HMPID DCS is presented. The costs and merits of its implementation as a function of the chosen HV and LV systems will also be reported.
Summary:
The detectors for the LHC experiments will be installed in underground caverns. This removes the possibility of local intervention during the operation of the LHC accelerator. Consequently, remote access becomes a primary requirement, and an efficient DCS will be mandatory in order to operate and control such a complex detector.
From the DCS point of view, the HMPID consists of 4 sub-systems, each with parameters to be set and/or read out. These sub-systems are:
- the LV power supply system,
- the HV power supply system,
- the gas system for the multiwire proportional chamber,
- the liquid circulation system for the Cherenkov radiators.
The HMPID DCS is structured in three well-defined layers: the process layer, the control layer and the supervisory layer. The first consists of sensors, actuators and custom hardware (FEE, LV, ...); the second consists of digital-analogue modules interfacing the process layer, supervised by control computers of the PLC (Programmable Logic Controller) type connected over a dedicated general-purpose LAN, i.e. Ethernet and TCP/IP. The third consists of a software system based on a server/client model, whose purpose is to configure, control and operate the HMPID either integrated in the ALICE DCS or in standalone mode for maintenance and upgrading of the detector, as sketched below.
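As a purely conceptual sketch of the supervisory/control dialogue, the C fragment below sends a text request for one monitored parameter to a control-layer server over TCP and reads the reply. The host address, port and message format are invented for illustration; the real supervisory layer will be based on the industrial product recommended by JCOP rather than on hand-written sockets.

/*
 * Conceptual client-side sketch of the server/client dialogue between the
 * supervisory and control layers over TCP/IP. The "GET <name>" protocol,
 * addresses and ports are assumptions made for the sketch only.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Ask the control-layer server for one parameter; the reply is returned
 * as a text string. Returns 0 on success, -1 on any error. */
int read_parameter(const char *server_ip, int port, const char *name,
                   char *reply, size_t reply_len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons((unsigned short)port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    char request[128];
    snprintf(request, sizeof(request), "GET %s\n", name);   /* e.g. "GET HV.CH03" */
    if (write(fd, request, strlen(request)) < 0) {
        close(fd);
        return -1;
    }

    ssize_t n = read(fd, reply, reply_len - 1);
    reply[n > 0 ? n : 0] = '\0';
    close(fd);
    return (n > 0) ? 0 : -1;
}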
Given the large number of parameters to be handled in the ALICE DCS, and according to the JCOP recommendations, the supervisory software should be based on an industrial product (under selection) running on workstations with the NT or Linux operating systems. Consequently, the HMPID DCS will also be based on the same product so that it can easily be imported into the ALICE DCS.
While we are already running DCS prototypes of the liquid and gas systems, at present the crucial sub-system to be integrated into the HMPID DCS is the LV power supply system.
Some reliable commercial solutions with an OPC server, supporting the TCP/IP protocol and matching the electronics power consumption, are available, but their cost seems rather high compared with a custom solution.
In the latter case, however, the custom auxiliary electronics needed for voltage and current sensing and for LV channel switching would require non-standard maintenance compared with the long-term support ensured by companies that supply crates with proper connectivity and LV modules with complete remote control. Therefore, after a market survey, we are inclined to adopt a commercial solution based on the CAEN SY1527 (or SY527) system as the HV-LV power supply system.
Id: 95
Corresponding Author: Eric CANO
Experiment: CMS
Sub-system: DAQ
Topic: Detector Control And Real Time Systems
Software developments for the Readout Unit Prototypes for CMS DAQ System
M. Bellato (INFN Sezione di Padova)
G. Antchev, E. Cano, S. Cittolin, B. Faure, D. Gigi, J. Gutleber, C. Jacobs, F. Meijers, E. Meschi, L. Orsini, L. Pollet, A. Racz, D. Samyn, N. Sinanis, W. Schleifer, P. Sphicas (CERN)
A. Ninane (Université Catholique de Louvain)
Abstract:
In the CMS data acquisition system, the Readout Unit is a fast buffering device for short-term storage of event fragments. It interfaces the front-end devices and the builder data network.
The current Readout Unit prototypes are based on two in-house hardware boards, the Readout Unit Memory (RUM) and the Readout Unit I/O (RUIO). These boards are equipped with an IOP, for which several OS environments have been developed. The software running on these boards has to set up and control the input and output processes. Fast IOP-to-host communication is being investigated. A software test environment has been specifically designed for the test and validation of the complex memory management of the RUM.
Summary:
The RUIO and RUM prototypes both include a PLX IOP480 with a PowerPC core. These IOPs are connected to the host (any PCI workstation) through a PCI bridge. The PCI bridge also allows communication from IOP to IOP.
The IOP requires an operating system; therefore, VxWorks has been ported to both the RUIO and the RUM environments. An experimental Linux port is also in progress.
The purpose of the IOP is to set up and control the RUM board and the link elements. In prototyping environments, the IOP on the RUIO or the RUM can simulate part of the data acquisition system in order to test individual parts of the RU or of the event builder. In this context the host also plays a role, and fast communication between the host and the RUIO is therefore tested using the hardware FIFOs in the PCI part of the RUIO, as sketched below.
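The sketch below shows, in C, the kind of memory-mapped FIFO access on which such a host-to-IOP communication test can be based. The register offsets, the status bits and the mapping of the PCI base address are assumptions; they do not describe the actual RUIO register layout.

/*
 * Illustrative host-side access to a hardware FIFO mapped through the PCI
 * interface of the board. Offsets and status bits are hypothetical.
 */
#include <stdint.h>

#define FIFO_DATA    0x00   /* assumed data register offset   */
#define FIFO_STATUS  0x04   /* assumed status register offset */
#define FIFO_FULL    0x01   /* assumed "FIFO full" bit        */
#define FIFO_EMPTY   0x02   /* assumed "FIFO empty" bit       */

/* 'base' points to the board's PCI region, already mapped by the framework. */
static inline void fifo_write(volatile uint32_t *base, uint32_t word)
{
    while (base[FIFO_STATUS / 4] & FIFO_FULL)
        ;                                   /* busy-wait until there is room */
    base[FIFO_DATA / 4] = word;             /* push one word towards the IOP */
}

static inline int fifo_read(volatile uint32_t *base, uint32_t *word)
{
    if (base[FIFO_STATUS / 4] & FIFO_EMPTY)
        return 0;                           /* nothing from the IOP yet */
    *word = base[FIFO_DATA / 4];
    return 1;                               /* one word received */
}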
The test software is based on a generic PCI board framework. This framework provides cross-platform development capabilities, with very little porting effort from platform to platform. An additional GUI is developed with LabVIEW. The currently supported platforms are MacOS, Linux and VxWorks.