Low Voltage Control for the Liquid Argon Hadronic End-Cap Calorimeter of ATLAS

H.Brettel*, W.D.Cwienk, J.Fent, H.Oberlack, P.Schacht
MAX-PLANCK-INSTITUT FUER PHYSIK
Werner-Heisenberg-Institut
Foehringer Ring 6, D-80805 Muenchen
<brettel@mppmu.mpg.de>

Abstract

The ATLAS collaboration foresees a SCADA system for the slow control and monitoring of all sub-detectors. PVSS2 has been chosen as the software, and a CANbus system is proposed for the hardware links.

For the Hadronic End-Caps of the Liquid Argon Calorimeter, the control system for the low voltage supplies is based on this concept. The 320 preamplifier and summing boards, which contain the cold front-end chips, can be switched on and off individually or in groups. The voltages, currents and temperatures are measured and stored in a database, and error messages are delivered on over-current or wrong output voltages.

*Corresponding author, E-mail: brettel@mppmu.mpg.de

Summary

The slow control of the ATLAS sub-detectors and components is realized by SCADA software installed on a computer network; at present this is PVSS2 from the Austrian company ETM.

Links between network nodes and the hardware are realized in different ways. Between the last node and the detector electronics, a CANbus is foreseen in some cases for the transfer of control signals and the monitoring of temperatures, supply voltages and currents.

An example is the Hadronic End-Cap of the Liquid Argon Calorimeter. The application software in a PC, called a PVSS2 project, connects to the CANbus via the OPC driver software and a NICAN2 interface board and acts as bus master. CANbus slaves are offered by industry for several purposes; we use the ELMB from the CERN DCS group, which is tailored to our needs. It contains two microprocessors as well as digital and analog I/O ports.

Each of the two HEC wheels consists of 4 quadrants, each served by a feed-through with a front-end crate on top of it. The low voltages for 40 PSBs, the preamplifier and summing boards which contain the cold GaAs front-end chips, are delivered by a power box installed between the fingers of the Tile Calorimeter, about half a meter away from the crates.

The input to a power box, a DC voltage in the range of 200 to 300 V, is transformed into +8, +4 and -2 V on the 3 output lines by DC/DC converters. On 2 control boards the lines are split into 40 channels, one for the supply of each PSB. Integrated low voltage regulators on each power line allow individual adjustment and ON/OFF control; we use the L4913 and L7913 from STMicroelectronics. The ELMBs and logic chips are also mounted on the control boards and establish the connection between the regulators and the CANbus.

An ELMB has 8-bit digital I/O ports. In order to keep the system architecture as simple as possible and to increase reliability, only 5 of the 8 bits are used. One ELMB controls the 5 PSBs which belong to the same longitudinal end-cap segment. The consequence: if an ELMB fails, only one longitudinal segment is affected.
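The 5-of-8-bit scheme above can be sketched as a simple mask operation. All helper names here are hypothetical, not the actual ELMB firmware interface:

```python
# Sketch of the 5-of-8-bit channel mapping described above. One ELMB digital
# output byte drives the 5 PSBs of one longitudinal segment; bits 5..7 are
# deliberately left unused.

PSB_PER_ELMB = 5  # one ELMB serves the 5 PSBs of one longitudinal segment

def set_psb(port_byte: int, channel: int, on: bool) -> int:
    """Return the new 8-bit digital-output value with one PSB switched."""
    if not 0 <= channel < PSB_PER_ELMB:
        raise ValueError("only bits 0..4 are used; bits 5..7 stay free")
    mask = 1 << channel
    return (port_byte | mask) if on else (port_byte & ~mask & 0xFF)

# Switch on channels 0 and 3, then switch channel 0 off again:
port = 0x00
port = set_psb(port, 0, True)   # 0b00000001
port = set_psb(port, 3, True)   # 0b00001001
port = set_psb(port, 0, False)  # 0b00001000
```

Grouping the 5 channels of one segment onto one ELMB is what confines a board failure to a single segment.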

The low voltage regulators have a current limit, adjusted to a value low enough that the wires in the feed-through cannot be damaged by a steady short circuit inside the cryostat. In addition, on an over-current error signal from one regulator, the logic on the control board immediately switches off all 3 low voltage regulators of that channel. The control program is then informed via the CANbus about the details of the problem.
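A toy model of the per-channel trip behaviour described above, with illustrative names (the real logic is implemented in hardware on the control board):

```python
# Toy model of the per-channel interlock: an over-current flag on any one
# regulator switches off all three supply rails of that PSB channel.

RAILS = ("+8V", "+4V", "-2V")

def apply_overcurrent(channel: dict, faulty_rail: str) -> dict:
    """Return the channel state after an over-current trip on one rail:
    all three regulators of the channel are disabled."""
    if faulty_rail not in RAILS:
        raise ValueError(f"unknown rail: {faulty_rail}")
    return {rail: False for rail in channel}

channel = {rail: True for rail in RAILS}      # PSB fully powered
channel = apply_overcurrent(channel, "+4V")   # fault on the +4 V regulator
# channel is now all-off; the CanBus error report follows
```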

Meanwhile, tests of hardware and software prototypes have been carried out successfully. Work on the control boards is progressing and we are gaining more experience with the PVSS2 software. We are also considering an emergency control system, independent of the CANbus, for the case that a computer or the bus itself fails.


The Embedded Local Monitor Board in the LHC Detector Front-end I/O Control System

B. Hallgren, H. Burckhart, H. Kvedalen

CERN, EP-ATI/CS
Bjorn.Inge.Hallgren@cern.ch
Hallvard.Kvedalen@cern.ch
Helfried.Burckhart@cern.ch

Abstract

The ELMB is a plug-in board to be used in LHC detectors as a general-purpose system for the front-end control and monitoring. It is based on CANbus, is radiation tolerant and can be used in magnetic fields. Results of the radiation tests will be presented and examples of applications will be described.

Summary

A versatile general-purpose system for front-end detector control, the Local Monitor Box (LMB), was designed in 1998 and tested by the ATLAS sub-detector groups in test-beam and other applications. With this experience, and to better match the needs of the ATLAS sub-detector groups, a modified version, the Embedded Local Monitor Board (ELMB), was designed. Its main feature is that it now comes as a general-purpose plug-in board of size 50 x 67 mm. The board can either be plugged directly onto the sub-detector front-end electronics, or onto a general-purpose motherboard which adapts the analog I/O signals. In order to make the ELMB available to ATLAS and to other LHC experiments, which have also expressed interest, a small-scale production of 300 boards for evaluation was made by the CERN EP-ESS group in spring 2001.

The ELMB is based on the ATMEL low-power RISC microcontroller ATmega103. A second microcontroller, the AT90S2313, performs in-system programming and monitoring functions, including detection of radiation-induced Single Event Effects. A separate controller, the Infineon SAE81C91, is used for the CANbus. The CANopen protocol has been chosen as the high-level software. The ELMB can be powered remotely with the help of three low-drop power regulators. The power needed is 5 V, 20 mA for the CANbus; 3.3 V, 15 mA for the microcontrollers; and 5 V, 10 mA for the ADC. The regulators also function as low-pass filters and provide current limitation and thermal protection for the ELMB. On the back side of the PCB are two high-density SMD connectors and, optionally, a 16+7 bit delta-sigma ADC with 64 differential inputs. Up to 34 digital I/O lines are available. The ATmega103 runs at a clock speed of 4 MHz and has 128 kbytes of on-chip flash memory, 4 kbytes of SRAM and 4 kbytes of EEPROM. Even when the ELMB is installed in the detector, the flash memory of the processors can be programmed over the CAN bus using their In-System Programming feature.
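As a quick cross-check of the supply figures quoted above, the remote-powering budget comes to roughly 0.2 W per board:

```python
# Back-of-envelope check of the ELMB power budget quoted in the text.
rails = {                       # (volts, amps) per sub-system
    "CANbus":           (5.0, 0.020),
    "microcontrollers": (3.3, 0.015),
    "ADC":              (5.0, 0.010),
}
total_w = sum(v * i for v, i in rails.values())
# total_w is just under 0.2 W, small enough to power the board over the bus cable
```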

A motherboard is available in order to evaluate the ELMB and for non-embedded applications. On the back side it carries two 100-pin SMD connectors for the ELMB and sockets for adapters for the 64-channel ADC. Adapters are available for different temperature sensors such as NTC resistors, 2-wire Pt1000 and 4-wire Pt100 sensors. The motherboard may be mounted in a DIN-rail housing of size 80 x 190 mm. On the front side there are connectors for the ADC inputs, the digital ports, an SPI interface, the CAN interface and power.

The environmental requirements are such that it can be used in the ATLAS cavern (USA15), outside of the calorimeter in the area of the muon detectors (MDTs) and further out. This implies tolerance to radiation up to about 5 Gy and 3x10^10 neutrons/cm2 over a period of 10 years, and to a magnetic field of up to 1.5 T. Several radiation tests of the ELMB have been made following the procedures laid out by the ATLAS radiation policy. The results of TID tests made at Pagure (Saclay) and GIF (CERN) with different dose rates will be reported. Further neutron testing at Prospero and SEE tests at Cyclone (Louvain-la-Neuve) are planned for spring and summer 2001. The ELMB was also tested for a week at 100 °C for accelerated ageing, corresponding to 40000 h at 25 °C.


A 10uV-offset DMILL opamp for ATLAS LAr calorimeter

C. de La Taille, J.P. Richer, N. Seguin-Moreau, L. Serin LAL Orsay FRANCE

Christophe de LA TAILLE
Laboratoire de l'Accelerateur Lineaire
Centre d'Orsay - bat
F 91 898 ORSAY Cedex
taille@lal.in2p3.fr
Tel: (33) 1 64 46 89 39
Fax: (33) 1 64 46 89 34

Abstract

In order to calibrate the LAr calorimeter to 0.25% accuracy, precision pulsers have been designed to provide a fast and precise pulse that simulates the detector pulse over its full 16-bit dynamic range. They are based on a precision DC current source (2 mA-200 mA) built with a low-offset opamp and a 0.1% 5 ohm external resistor. After several COTS parts failed the irradiation tests, a custom chip was designed and fabricated, first in AMS 0.8 um BiCMOS and then in DMILL. It has been successfully tested, and the electrical performance and irradiation results will be shown.


TTCPR: A PMC RECEIVER FOR TTC

John W. Dawson, David J. Francis, William N. Haberichter,and James L. Schlereth

Argonne National Laboratory and CERN
John.Dawson@cern.ch

The TTCPR receiver is a mezzanine card intended for distributing TTC information to Data Acquisition and Trigger crates in the ATLAS prototype integration activities. An original prototype run of these cards, implemented in both the PMC and PCI form factors, was built for testbeam and integration studies using TTCrx chips from the previous manufacture. When the new TTCrx chips became available, the TTCPR was redesigned to take advantage of their availability and enhanced features; a run of 20 PMC cards was manufactured and has since been used in integration studies and the testbeam. The TTCPR uses the AMCC 5933 to manage the PCI port and an Altera 10K30A to provide all the logic, so that the functionality may be easily altered, and provides a 4K-deep FIFO to retain TTC data for subsequent DMA through the PCI port. In addition to DMAs mastered by the Add-On logic, communication through PCI is accomplished via mailboxes, interrupts, and the Pass-thru feature of the 5933. An interface to the I2C bus of the TTCrx is provided so that internal registers may be accessed, and the card supports reinitialization of the TTCrx from PCI. Software has been developed to support operation of the TTCPR under both LynxOS and Linux.


A Remote Control System for On-Detector VME Modules of the ATLAS Endcap Muon Trigger

Author List:
K. Hasuko(1), C. Fukunaga(5), R. Ichimiya(3), M. Ikeno(2), Y. Ishida(5), H. Kano(5), Y. Katori(1), T. Kobayashi(1), H. Kurashige(3), K. Mizouchi(4), Y. Nakamura(1), H. Sakamoto(4), O. Sasaki(2) and K. Tanaka(5)

(1) International Center for Elementary Particle Physics (ICEPP), University of Tokyo
(2) High Energy Accelerator Research Organization (KEK)
(3) Department of Physics, Kobe University
(4) Department of Physics, Kyoto University
(5) Department of Physics, Tokyo Metropolitan University

University of Tokyo
7-3-1 Hongo
Bunkyo-ku, Tokyo 113-0033, JAPAN

hasuko@icepp.s.u-tokyo.ac.jp
www.icepp.s.u-tokyo.ac.jp/~hasuko

Abstract

We present the development of a remote control system for on-detector VME modules of the ATLAS endcap muon trigger. The system consists of a local controller in an on-detector VME crate and a remote interface in a Readout Driver crate. The controller and interface are connected by dedicated optical links based on G-LINK. The control system can fully configure and control the modules, especially FPGA-embedded ones, using G-LINK words and the VME bus from a remote host. The system supports periodic read-back and reconfiguration to protect the configuration data against SEUs. The concept, prototype and initial performance tests of the system are discussed.

Summary

We present the development of a remote control system for on-detector VME modules of the ATLAS endcap muon trigger. These VME modules are the Star Switch (SSW) and the High-pT board (HPT); the remote control system fully controls and configures them from outside the detector.

The SSW is a relay module between the on-detector modules and the Readout Driver (ROD). It receives hit information from readout buffers in the modules, performs data reduction and formatting, and transfers the results to the ROD. The SSW is based on FPGAs which must be configured and controlled by the remote control system. Since the configuration data are susceptible to radiation-induced single event upsets (SEUs), dedicated control and configuration links are necessary, and every configuration must be verified as correct via these links.

The other module, the HPT, is part of the trigger system. It is based on an ASIC configured through internal registers and is fully controlled from outside via the same links.

The remote control system consists of the HPT/SSW Controller (HSC) and the Control/Configuration Interface (CCI). The HSC is a local controller in a HPT/SSW VME crate; the CCI is a remote interface in a ROD crate. The HSC communicates with the ROD host via the CCI over optical links based on G-LINK, dedicated to control and configuration. The host manages an instruction set; each instruction is encoded into a 14-bit G-LINK control word and executed on the HSC. When downloading data, additional 16-bit data words are used. Following the instructions, the HSC can master the VME bus to access the HPT/SSW modules through VME protocol encoders implemented in CPLDs on both the HSC and the HPT/SSW modules. All control and configuration are performed via VME accesses.
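The word-based instruction scheme can be sketched as follows. The text fixes only the 14-bit width, so the opcode/operand split below is an assumption for illustration:

```python
# Illustration of packing one instruction into a 14-bit G-LINK control word.
# The actual HSC instruction format is not given in the text; the 6-bit
# opcode / 8-bit operand split here is an assumed layout.

def encode(opcode: int, operand: int) -> int:
    """Pack a 6-bit opcode and an 8-bit operand into one 14-bit word."""
    if not (0 <= opcode < 1 << 6 and 0 <= operand < 1 << 8):
        raise ValueError("opcode is 6 bits, operand 8 bits")
    return (opcode << 8) | operand

def decode(word: int) -> tuple:
    """Split a 14-bit control word back into (opcode, operand)."""
    return word >> 8, word & 0xFF

word = encode(0x12, 0xA5)   # fits in 14 bits; 16-bit data words would follow
```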

The FPGAs in the SSWs are also configured via the VME bus, using a byte-based configuration scheme. To resist SEUs, the configuration data are periodically read back; once an SEU is detected, the affected FPGA is instantly reconfigured.
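The read-back/reconfigure ("scrubbing") cycle can be sketched as a minimal model; the accessor names below are hypothetical stand-ins for the byte-based VME configuration accesses:

```python
# Sketch of the periodic read-back / reconfiguration cycle described above.

def scrub(fpga, golden: bytes) -> bool:
    """Compare the live configuration with the golden image; reload on a
    mismatch. Returns True if an SEU was detected and repaired."""
    if fpga.read_config() == golden:
        return False              # configuration intact, nothing to do
    fpga.write_config(golden)     # instant reconfiguration
    return True

class FakeFpga:
    """Stand-in for an SSW FPGA reachable over the VME bus."""
    def __init__(self, image: bytes):
        self._image = bytearray(image)
    def read_config(self) -> bytes:
        return bytes(self._image)
    def write_config(self, image: bytes) -> None:
        self._image = bytearray(image)
    def flip_bit(self, byte: int, bit: int) -> None:
        self._image[byte] ^= 1 << bit   # inject a fake SEU

golden = bytes([0xAA, 0x55, 0xF0])
fpga = FakeFpga(golden)
fpga.flip_bit(1, 2)                 # radiation flips one configuration bit
repaired = scrub(fpga, golden)      # mismatch detected, image reloaded
```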

The CPLDs serving as VME protocol encoders are configured over JTAG provided on the VME backplane. These JTAG signals are arranged in a bus structure and mastered by an embedded JTAG controller on the HSC with dedicated instructions. Most of the HSC functionality is also built with CPLDs, configured over the same JTAG bus. Only the core part of the instruction encoder on the HSC, which manages the JTAG-related instructions, is built as an ASIC in order to resist SEUs. Therefore all the related CPLDs remain configurable through the JTAG provided via the ASIC encoder.

The detailed concept, the prototype and initial performance tests of the HSC/CCI control system are discussed in this workshop. We will also show that the performance meets the requirements for controlling and configuring the HPT and SSW modules.


Development of a Detector Control System for the ATLAS Pixel Detector

G. Hallewell, Centre de Physique des Particules de Marseille
S. Kersten, University Wuppertal
Susanne.Kersten@cern.ch

Abstract

The pixel detector of the ATLAS experiment at the CERN LHC will contain around 1750 individual detector modules. The high power density of the electronics - requiring an extremely efficient cooling system – together with the harsh radiation environment constrains the design of the detector control system.

An evaporative fluorocarbon system has been chosen to cool the detector. Since irradiated sensors can be irreparably damaged by heating up, great emphasis has been placed on the safety of the connections between the cooling system and the power supplies. An interlock box has been developed for this purpose, and has been tested in prototype form with the evaporative cooling system.

We report on the status of the evaporative cooling system, on the plans for the detector control system and upon the performance and irradiation tests of the interlock box.


Production and Radiation Tests of A TDC LSI for the ATLAS Muon Detector

Authors :
Yasuo Arai
KEK, National High Energy Accelerator Research Organization
Institute of Particle and Nuclear Studies
1-1 Oho, Tsukuba, Ibaraki 305-0801, JAPAN
Tel: +81-298-64-5366, fax +81-298-64-2580
yasuo.arai@kek.jp
and
T. Emura
Tokyo University of Agriculture and Technology

Abstract

The ATLAS Muon TDC (AMT) LSI has been successfully developed, and the performance of a prototype chip (AMT-1) was reported at LEB 2000. A new AMT chip (AMT-2) has been developed aiming at mass production. The AMTs were processed in a 0.3 um CMOS gate-array technology. To proceed to a mass production of 400 k channels (~17,000 chips) scheduled for 2002, systematic test methods must be established. Furthermore, the chip must be qualified to have adequate radiation tolerance in the ATLAS environment. The test method and the results of the radiation tests with gamma rays and charged particles will be presented.

Summary

A TDC LSI for the ATLAS precision muon tracker (MDT) has been developed. The TDC chip, called AMT, was processed in a 0.3 um CMOS gate-array technology. It contains 24 input channels, a 256-word level-1 buffer, an 8-word trigger FIFO and a 64-word readout FIFO. It also includes a trigger-matching circuit, which selects data according to a trigger. The selected data are transferred through a 40 Mbps serial line. By using a Phase Locked Loop (PLL) circuit, the chip achieves 300 ps timing resolution. It is packaged in a 144-pin plastic QFP with 0.5 mm pin pitch, and about 110k gates are used.
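The sub-nanosecond timing follows the usual scheme of a coarse clock counter combined with fine PLL phase taps. A hedged sketch, with the fine bin count assumed for illustration rather than taken from the AMT documentation:

```python
# Coarse counter + PLL phase interpolation, as commonly used in LHC TDCs.
# The fine subdivision count below is an assumption, not the AMT register map.

CLOCK_NS = 25.0     # 40 MHz LHC clock period
FINE_BINS = 32      # assumed PLL subdivision per clock period

def tdc_time_ns(coarse: int, fine: int) -> float:
    """Combine a coarse clock count and a fine PLL phase into a time stamp."""
    if not 0 <= fine < FINE_BINS:
        raise ValueError("fine phase must fall within one clock period")
    return coarse * CLOCK_NS + fine * (CLOCK_NS / FINE_BINS)

t = tdc_time_ns(coarse=4, fine=8)   # 4 * 25 ns + 8 * (25/32) ns = 106.25 ns
```

The trigger-matching circuit then selects only those time stamps that fall within a window relative to the trigger time.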

A prototype chip, the AMT-1, was successfully tested and reported at the last LEB workshop. The AMT-1 was mounted on a front-end PC board with ASD (Amplifier/Shaper/Discriminator) chips, and system tests connected to a detector have been carried out (submitted to this workshop).

A mass-production prototype chip, the AMT-2, has recently been developed. Although the AMT-1 operated successfully, it consumed relatively large power in its LVDS receiver circuits. A low-power LVDS receiver was therefore developed and included in the AMT-2. In addition, testability was enhanced and several minor bugs were fixed.

Mass production of 400 k channels (~17,000 chips) is scheduled for early 2002. Most of the chip tests are done by the manufacturer, but a systematic test system is still needed. Furthermore, the chip must be qualified to have adequate radiation tolerance in the ATLAS environment. Gamma-ray irradiation to measure Total Ionizing Dose (TID) damage and proton irradiation to measure Single Event Effects (SEE) are planned.

Test methods and results of the radiation tests will be presented.


On the developments of the Read Out Driver for the ATLAS Tile Calorimeter

Authors: Jose Castelo, Vicente Gonzalez, Enrique Sanchis
IFIC and Dpt of Electronic Engineering. University of Valencia

Vicente Gonzalez
DSDC - Grupo de Diseño de Sistemas Digitales y de Comunicación
Dept. Ingenieria Electronica. Universitat de Valencia
vicente.gonzalez@uv.es

Abstract

"This works describes the present status and future evolution of the Read Out Driver for the ATLAS Tile Calorimeter. The developments currently under execution include the test of the adapted LAr ROD to Tile Cal needs and the design and implementation of the PMC board for algorithm testing at ATLAS rates. We will describe the test performed at University of Valencia with the LAr ROD motherboard and a new developped transition module with 4 SLINK inputs and one output which match the initial TileCal segmentation for RODs. We will also describe the work going on with the design of a DSP based PMC with SLINK input for real time data processing to be used as a test environment for optimal filtering."


Development of Radiation Hardened DC-DC converters for the ATLAS Liquid Argon Calorimeter

Helio Takai and James Kierstead
Brookhaven National Laboratory
takai@bnl.gov

The power supplies for the ATLAS liquid argon calorimeter, using 300 V-input DC-DC converters, will be located in a high-radiation environment. Over the life of the experiment (i.e. 10 years) the total ionizing dose is expected to reach 25 krad, along with a projected total fluence of 2x10^12 particles/cm^2 of 1 MeV-equivalent neutrons, of which a fraction, 1x10^11 neutrons/cm^2, has energies above 20 MeV. These values include the standard ATLAS recommended safety factors. The anticipated effects, in order of potential seriousness, are: (a) single event burnout (SEB) of the input power MOSFET, (b) total dose effects on active CMOS and bipolar components and (c) neutron-induced lattice displacements causing conductivity and other changes in active components. The power supply will also be subjected to a magnetic field with a strength of 50 Gauss.

Tests performed on commercially available modules manufactured by Vicor found that none satisfied the requirements. We are therefore pursuing a semi-custom design, which typically raises questions of reliability and cost. The approach for the development of a prototype is to select a vendor with experience in designing power supplies for radiation environments, e.g. space applications; this provides some assurance that the power supply will be hardened against ionizing and neutron radiation. To reduce cost, the radiation-hardened power MOSFET will then be replaced with a less expensive commercial power MOSFET. It is known that by operating a commercial MOSFET at a lower (derated) voltage it is possible to use it safely in an environment with high-energy particles: for instance, a 600 V MOSFET operated at 500 V might show a large SEB cross-section, but has a negligible SEB cross-section at 300 V.

We will report on the progress of the design and comment on the different practical issues of the process such as purchasing of components in lots and specifying parts. Results of power MOSFET qualification will also be presented as well as preliminary test results. Packaging and operational issues will also be discussed as time allows.


CMOS front-end for the MDT sub-detector in the ATLAS Muon Spectrometer, development and performance.

C. Posch*, E. Hazen, J. Oliver:
christoph.posch@cern.ch
hazen@bu.edu
oliver@huhepl.harvard.edu

Abstract

Development and performance of the final 8-channel front-end for the MDT segment of the ATLAS Muon Spectrometer is presented. This last iteration of the read-out ASIC contains all the required functionality and meets the envisaged design specifications. In addition to the basic "amplifier-shaper-discriminator"-architecture, MDT-ASD uses a Wilkinson ADC on each channel for precision charge measurements on the leading fraction of the muon signal. The data will be used for discriminator time-walk correction, thus enhancing spatial resolution of the tracker, and for chamber performance monitoring (gas gain, ageing etc.). The feasibility of the MDT system to perform particle identification through dE/dX measurement using the Wilkinson ADC is evaluated. Results of performance and functionality tests in the lab and on-chamber along with an outlook to volume-production and production testing are presented.

Summary

This article reviews the development of the final 8-channel front-end for the MDT segment of the ATLAS Muon Spectrometer and presents results of performance and functionality tests on the last pre-production prototype. The MDT-ASD is an octal CMOS Amplifier/Shaper/Discriminator designed specifically for the ATLAS MDT chambers, implemented as an ASIC in a high-quality analog 0.5 um CMOS process. The analog signal chain of the MDT-ASD was already presented for a previous prototype version of the chip and has not changed significantly since then; it is therefore addressed only briefly in this article.

New developments include the implementation of a Wilkinson-type charge-to-time converter, on-chip programmability of certain functional and analog parameters, and a serial control data interface. Bipolar shaping was chosen to prevent baseline shift at the anticipated level of background hits. The shaper output is fed into a discriminator for the timing measurement and into the Wilkinson ADC section, which performs the leading-edge charge measurement. The information contained in the Wilkinson output pulse, namely the leading-edge timing and the pulse-width-encoded signal charge, is read and converted to digital data by a TDC. The Wilkinson cell operates under the control of a gate generator built entirely of differential logic cells; it is thus highly immune to substrate coupling and can operate in real time without disturbing the analog signals. The final output is sent to the LVDS cell and converted to external low-level differential signals.

The main purpose of the Wilkinson ADC is to provide data for the correction of time-slew effects due to pulse-amplitude variations; time-slew correction improves the spatial resolution of the tracking detector. In addition, this type of charge measurement provides a useful tool for chamber performance diagnostics and monitoring (gas gain, ageing etc.). Further applications, such as dE/dx measurements of slow-moving heavy particles like heavy SUSY partners of the muon, are conceivable. Test results on the conversion characteristics are shown, as well as measurements of the noise performance and of the non-systematic charge-measurement errors of the Wilkinson ADC. The feasibility of particle identification in the MDT system through dE/dx measurement using the Wilkinson ADC is evaluated, and results of a simulation study on energy-separation probability are presented.

It was found advantageous to be able to control or tune certain analog and functional parameters of the MDT-ASD, both at power-up/reset and during run time; a serial I/O data interface using a JTAG-type protocol, plus a number of associated DACs, were therefore implemented in the chip. In order to facilitate prototype testing during the design phase, as well as system calibration and test runs with the final assembly, a calibration/test-pulse injection system was integrated in the chip. It consists of a bank of 8 parallel switched capacitors per channel and an associated channel mask register, which allows each channel to be selected individually for test pulses. The capacitors are charged with external standard LVDS voltage pulses, yielding an input signal charge range similar to the expected range of the tube signals. This pulse injection system allows automated timing and charge-conversion calibration of the system; hence, in principle, all systematic errors of the readout electronics can be calibrated out for each individual channel. Finally, an outlook to volume production and production testing of the chip is given.
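The injected calibration charge scales linearly with the number of enabled capacitors in the bank. A sketch with assumed component values (the actual MDT-ASD capacitor size and LVDS swing are not given in the text):

```python
# Rough numbers for the switched-capacitor test-pulse injection described
# above. Capacitor value and LVDS step are illustrative assumptions only.

C_INJ = 50e-15    # assumed 50 fF per injection capacitor
V_STEP = 0.4      # assumed ~400 mV LVDS voltage step

def injected_charge_fc(n_caps: int) -> float:
    """Charge in fC injected by enabling n of the 8 capacitors in the bank."""
    if not 0 <= n_caps <= 8:
        raise ValueError("the bank has 8 switched capacitors per channel")
    return n_caps * C_INJ * V_STEP * 1e15

q = injected_charge_fc(3)   # three caps give 60 fC under these assumptions
```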


LOW DOSE RATE EFFECTS AND IONIZATION RADIATION TOLERANCE OF THE ATLAS TRACKER FRONT-END ELECTRONICS

M. Ullan*, D. Dorfan*, T. Dubbs*, A. A. Grillo*, E. Spencer*, A. Seiden*,H. Spieler**, M. Gilchriese**, M. Lozano***

*Santa Cruz Institute for Particle Physics (SCIPP)
University of California at Santa Cruz
Santa Cruz, CA 95064, USA
Tel: 1 831 459 3567
Fax: 1 831 459 5777
E-mail: ullan@scipp.ucsc.edu

**Lawrence Berkeley National Laboratory (LBNL)
University of California at Berkeley
***Centro Nacional de Microelectrónica (CNM-CSIC)
Barcelona, Spain

Abstract

Ionization damage has been investigated in the ABCD chip, the IC designed for the readout of the detectors in the Semiconductor Tracker (SCT) of the ATLAS experiment at the LHC. The technology used in the fabrication has been found to be free from Low Dose Rate Effects, which facilitates the studies of the radiation hardness of the chips. Other experiments have been carried out on individual transistors in order to study the effects of temperature and annealing, and to obtain quantitative information and a better understanding of these mechanisms. With this information, suitable irradiation experiments have been designed for the chips to better assess their survivability under the real conditions of the ATLAS detector.

Summary

The ABCD chip is the IC designed for the readout of the silicon detectors of the ATLAS Semiconductor Tracker (SCT) at the LHC. It is fabricated in the DMILL technology, a BiCMOS process on an SOI substrate. The chip has to be placed very close to the detectors and therefore inside the active area of the SCT, which means it must endure the same level of radiation as the detectors themselves.

Different experiments have been carried out to study the radiation hardness of this technology, and of the ABCD chip itself, at that high level of radiation. The physical mechanisms of the damage produced in the analog part of these chips by ionizing and non-ionizing radiation are different, and it is better to study them separately to gain a good understanding of the problem. We present here a study of the effects of ionizing radiation on the DMILL technology and the ABCD chip, taking total dose effects into account.

A first study checked whether the bipolar transistors of the DMILL technology suffer from Low Dose Rate Effects (LDRE), which would greatly complicate the remaining radiation hardness studies. Irradiation experiments were carried out on individual transistors of the technology, up to a sufficiently high total dose and over a very wide range of dose rates. The results demonstrate that the DMILL technology does not suffer from low dose rate effects, which facilitates the other radiation hardness studies. This result has been confirmed by further irradiations up to the total dose of interest. Together with this study, the annealing of the damage produced in the transistors has been investigated in order to separate this effect from the LDRE.

Other experiments have been carried out at different temperatures in order to determine the sensitivity of the radiation damage to temperature. The test structures were irradiated up to the total dose of interest over a wide range of temperatures, from 10 to 110 °C. The results reveal two opposing effects: the damage increases at higher temperatures, but so does the annealing of that damage. Together they produce a worst-case temperature at which the transistors suffer the largest damage, with less damage at both lower and higher temperatures.

Finally, the ABCD chip has been irradiated up to a total dose of 10 Mrad at a high dose rate, in order to produce the total ionization damage expected in the chip over the real experiment. Since it has been demonstrated that this technology shows no LDRE, the experiment can be performed at a high dose rate and within a short period of time. The results demonstrate that the ABCD chip remains within specifications after the expected ionization damage has been produced.


Progress in Development of the Analogue Readout Chip for Si Strip Detector Modules for LHC Experiments

E. Chesi, A. Clark, W. Dabrowski, D. Ferrere, J. Kaplon, C. Lacasta, J. Lozano, S. Roe, R. Szczygiel, P. Weilhammer, A. Zsenei.
dabrowsk@ftj.agh.edu.pl

Abstract

We present a new version of the 128-channel analogue front-end chip SCT128A for the readout of silicon strip detectors. Following the early prototype developed in the DMILL technology, we have elaborated a design with the main goal of improving its robustness and radiation hardness. The improvements implemented in the new design are based on the experience gained with the DMILL technology while developing the binary readout chip for the ATLAS Semiconductor Tracker. The architecture of the chip and critical design issues will be discussed. The performance of modules built from ATLAS baseline detectors read out by 6 SCT128A chips will be presented and discussed.

Summary

In parallel with the development of the binary readout chip for the ATLAS Semiconductor Tracker, we have been developing a chip with an analogue readout architecture, the SCT128A. Both chips have been developed in the DMILL technology and employ the same concept of a fast front-end circuit based on bipolar transistors. The analogue architecture has a number of potential advantages compared to the binary one; a feature particularly important for large installations like the LHC trackers is its immunity to common-mode noise effects. The first prototype of the SCT128A chip was designed and manufactured at an early stage of the stabilisation of the DMILL process. In the meantime the DMILL process has been improved and stabilised, and the development of the ABCD binary readout chip helped us to better understand and quantify various aspects of the process, such as matching, parasitic coupling through the substrate and radiation effects. The conclusions from the work on the ABCD chip have been implemented in the new design of the SCT128A, with the main goal of improving its robustness and radiation hardness. The SCT128A is designed to meet all basic requirements of a silicon strip tracker for LHC experiments. It comprises five basic blocks: front-end amplifiers, an analogue pipeline (ADB), control logic including a derandomizing FIFO, a command decoder and an output multiplexer. The front-end circuit is a fast transimpedance amplifier followed by an integrator, providing semi-gaussian shaping with a peaking time of 20-25 ns, and an output buffer. The peak values are sampled at a 40 MHz rate and stored in the 128-cell deep analogue pipeline. Upon arrival of a trigger, the analogue data from the corresponding time slot in the ADB are sampled in the buffer and sent out through the analogue multiplexer. The gain of the front-end amplifier is about 50 mV/fC, and the designed peaking time for the nominal values of resistors and capacitors is 20 ns.
The front-end circuit is designed so that it can be used with either polarity of the input signal; however, the full readout chain (NMOS switches in the analogue pipeline, output multiplexer) is optimised for p-side strips. The dynamic range of the amplifier is designed for a 12 fC input, which together with the gain of 50 mV/fC gives a full swing at the front-end output of about 600 mV. The current in the input transistor is controlled by an internal DAC and can be set within the range from 0 to 320 µA, which allows one to optimise the noise according to the actual detector capacitance. The design and the performance of the chip will be presented. The basic chip performance has been evaluated on the test bench. An analogue prototype module consisting of two 6.4 cm x 6.3 cm ATLAS baseline detectors read out by SCT128A chips has been built. The chips are mounted on a ceramic hybrid connected to the sensors in the end-tap configuration. The performance of the module will be presented and discussed.
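
The gain and dynamic-range figures quoted above determine the output swing directly; a minimal sanity-check sketch (the function name is illustrative, not from any chip documentation):

```python
# Back-of-envelope check of the SCT128A front-end figures quoted above.
# The function name is illustrative, not part of any chip documentation.

def output_swing_mv(gain_mv_per_fc: float, max_charge_fc: float) -> float:
    """Full-scale output swing of a linear front-end amplifier."""
    return gain_mv_per_fc * max_charge_fc

# A 50 mV/fC gain over a 12 fC dynamic range gives the ~600 mV swing
# stated in the text.
print(output_swing_mv(50.0, 12.0))  # 600.0
```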


TIM ( TTC Interface Module ) for ATLAS SCT & PIXEL Read Out Electronics

Jonathan Butterworth, jmb@hep.ucl.ac.uk
Dominic Hayes(*), Dominic.Hayes@ra.gsi.gov.uk
John Lane, jbl@hep.ucl.ac.uk
Martin Postranecky, mp@hep.ucl.ac.uk
Matthew Warren, warren@hep.ucl.ac.uk

University College London, Department of Physics and Astronomy
( * now at Radiocommunications Agency, London )

Martin Postranecky
Tel: [00-44]-(0)20-7679 3453
Tel: [00-44]-(0)20-7679 2000
Fax: [00-44]-(0)20-7679 7145

UNIVERSITY COLLEGE LONDON, DEPT.OF PHYSICS AND ASTRONOMY
High Energy Physics Group
Gower Street, LONDON, WC1E 6BT
E-Mail: mp@hep.ucl.ac.uk
http://www.hep.ucl.ac.uk

Abstract

The design, functionality, hardware and firmware of the TIM ( TTC Interface Module ), and preliminary results of the ROD ( Read Out Driver ) System Tests, are described.
The TIM is the standard SCT and PIXEL detector interface module to the ATLAS Level-1 Trigger, using the LHC-standard TTC ( Timing, Trigger and Control ) system.
The TIM was designed and built during the year 2000 and two prototypes have been in use since. More modules are being built this year to allow further tests of the ROD system at different sites around the world.

Summary

The TIM ( TTC Interface Module ) has been designed to provide the interface between the ATLAS Level-1 Trigger and the SCT and PIXEL off-detector electronics.
There will be one TIM module in each of the 9U-sized off-detector ROD ( Read Out Driver ) crates, distributing the timing, trigger and control signals to all the ROD modules in each crate via a custom-designed J3 backplane and BOC ( Back Of Crate ) modules.
Each TIM receives the TTC ( Timing, Trigger and Control ) information in optical form from the LHC-standard TTC distribution system, using the standard TTCrx receiver and decoder chip to provide electrical outputs.
Each TIM in turn receives the ROD BUSY signals from each ROD in the crate and transmits a masked-OR BUSY signal to the Level-1 Trigger via a ROD Busy Module.
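
The masked-OR of the crate's BUSY inputs can be sketched as follows; this is an illustration of the principle only, not the TIM firmware, and the signal names are invented:

```python
# Illustrative model of a masked-OR BUSY: any unmasked ROD asserting BUSY
# raises the crate-level BUSY that throttles the Level-1 Trigger.
# Signal names are invented for the example.

def crate_busy(rod_busy: list, mask: list) -> bool:
    """OR of all ROD BUSY inputs whose mask bit is enabled."""
    return any(busy and enabled for busy, enabled in zip(rod_busy, mask))

# ROD 2 is busy but masked out, so the crate BUSY stays deasserted.
print(crate_busy([False, False, True], [True, True, False]))  # False
# ROD 1 is busy and unmasked, so the crate BUSY is asserted.
print(crate_busy([False, True, False], [True, True, True]))   # True
```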
Two prototype TIM modules were manufactured last year and have been extensively tested since then at UCL. One module has been in use in the ROD System Tests at Cambridge since May 2001. Following these tests, further TIM modules are being built to allow further ROD System and Front End Module testing at various sites around the world.
The TIM has been designed as a 9U multilayer PCB with a standard VME slave interface, with all registers and configuration, control and monitoring accessible to the local crate processor. All the major logic elements of the TIM module are contained on ten large scale PLDs ( Programmable Logic Devices ), allowing for possible future design changes by firmware modification.
Each TIM is also capable of fully stand-alone operation, generating all the TTC-type signals under the control of the local processor. Each TIM can also act as a 'master' to synchronise a number of 'slave' TIM modules to allow for a stand-alone operation of a system consisting of more than one ROD crate.
As well as being the 'standard' TTC Interface Module for the SCT off-detector electronics, TIMs will also be provided to the PIXEL detector community, and possibly other detectors, to provide them with the TTC interface.


Development of a DMILL radhard multiplexer for the ATLAS Glink optical link and radiation test with a custom Bit ERror Tester.

Daniel Dzahini, for the ATLAS Liquid Argon Collaboration
Institut des Sciences Nucléaires
53 avenue des Martyrs,
38026 Grenoble Cedex France
dzahini@isn.in2p3.fr

Abstract

A high-speed digital optical data link has been developed for the front-end readout of the ATLAS electromagnetic calorimeter. It is based on a commercial serialiser commonly known as Glink, and a vertical cavity surface emitting laser. To be compatible with the data interface requirements, the Glink must be coupled to a radhard multiplexer, which has been designed in DMILL technology to reduce the impact of neutron and gamma radiation on the link performance. This multiplexer must satisfy very severe timing constraints related both to the Front-End Board output data and to the Glink control and input signals. The full link has been successfully neutron radiation tested by means of a custom Bit ERror Tester.

Summary

The Liquid Argon Calorimeter of the ATLAS experiment at the LHC is a highly segmented particle detector with approximately 200 000 channels. The signals are digitized on the front-end board and then transmitted to data acquisition electronics situated 100 m to 200 m away. The front-end electronics has a high degree of multiplexing, allowing the calorimeter to be read out over 1600 links, each transmitting 32 bits of data at the bunch-crossing frequency of 40.08 MHz. Radiation hardness is a major consideration in the design of the link, since the emitter side will be exposed to an integrated fluence of 3×10^14 n/cm² (1 MeV equivalent in Si) over 10 years of LHC running.

The demonstrator link is based on an Agilent Technologies HDMP1022/1024 serialiser/deserialiser. This Glink is used in double frame mode: the incoming 32-bit digitized data at 40.08 MHz are multiplexed into 16 bits at 80.16 MHz. A multiplexer ASIC has been developed in the DMILL technology. This MUX chip first translates the data from LVDS levels to CMOS levels, then performs the data registration and multiplexing. The 16-bit data words are loaded at the input of the HDMP1022 serialiser. The Glink chip set adds a 4-bit control field to each 16-bit data segment, which results in a total link data rate of 1.6 Gb/s.
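
The link-rate arithmetic above can be checked directly; a minimal sketch assuming only the figures quoted in the text:

```python
# Verify the quoted 1.6 Gb/s line rate: in double frame mode each
# 80.16 MHz frame carries 16 data bits plus a 4-bit control field.

def line_rate_gbps(data_bits: int, ctrl_bits: int, frame_mhz: float) -> float:
    """Serial line rate in Gb/s for (data + control) bits per frame."""
    return (data_bits + ctrl_bits) * frame_mhz * 1e6 / 1e9

print(round(line_rate_gbps(16, 4, 80.16), 4))  # 1.6032, quoted as ~1.6 Gb/s
```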

The Glink serialiser outputs drive a VCSEL that transforms the electrical signal into light pulses transmitted over a Graded Index (GRIN) 50/125 µm multimode fibre to a PIN diode located on the receiver board. For the link described in this document, the VCSEL and the PIN diode are packaged together with driving and discriminating circuits as transceiver modules manufactured by Methode. The PIN diode output signals are deserialised by the Glink HDMP1024 chip. A programmable logic device (ALTERA EPM7128) is placed on the receiver board to facilitate demultiplexing. Several link sender boards were exposed to a neutron flux to assess the radiation tolerance of the DMILL MUX, the Glink serialiser and the Methode transceiver. During the radiation tests the behaviour of the link was monitored on-line. A Bit ERror Tester (BERT) coupled to a pseudo-random pattern generator was specially developed for this purpose. The BERT is a modular tester that permits several high-speed (32-bit at 40.08 MHz) data links to be tested simultaneously; single-bit errors are detected by comparing the sent and received bits. The BERT comprises EPLD-based boards plugged into a VME crate; the slow control and error acquisition are done on-line by a personal computer. The radiation tolerance of the sender part of the link has been demonstrated under neutron irradiation up to 10^14 n/cm². Transient data transmission errors (Single Event Upsets) were observed with the BERT set-up, but it has been shown that the contribution of the DMILL MUX to this error rate is negligible.
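
The BERT principle, comparing transmitted and received bits of a shared pseudo-random stream, can be modelled in a few lines. The PRBS-7 polynomial below is a common choice used purely for illustration; the text does not specify the actual generator:

```python
# Software model of the BERT principle: both ends reproduce the same
# pseudo-random bit stream and errors are counted by direct comparison.
# A PRBS-7 LFSR (x^7 + x^6 + 1) is assumed here for illustration only.

def prbs7(seed: int = 0x7F):
    """Generate a PRBS-7 bit stream from a 7-bit LFSR."""
    state = seed
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        yield bit

def count_bit_errors(sent, received) -> int:
    """Number of positions where the two bit streams differ."""
    return sum(s != r for s, r in zip(sent, received))

gen = prbs7()
sent = [next(gen) for _ in range(32)]     # one 32-bit word
received = sent.copy()
received[10] ^= 1                         # inject an SEU-like bit flip
print(count_bit_errors(sent, received))   # 1
```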


Vertical Slice of the ATLAS Control System

Authors:

H.J.Burckhart, J.Cook, B. Hallgren, F.Varela, CERN-EP
Henk Boterenbrood, NIKHEF, Amsterdam, The Netherlands
Viatcheslav Filimonov, PNPI, St.Petersburg, Russia

Dr. Helfried Burckhart
European Laboratory for Particle Physics
CERN, EP
CH-1211 Geneva 23
Tel: (+41) 22 767 12 54
Fax: (+41) 22 767 83 50

Abstract:

The ATLAS Detector Control System (DCS) consists of two main components:

A distributed supervisor system running on PCs, and the different front-end systems. For the former, the commercial SCADA package PVSS-II has been chosen in the framework of the CERN Joint Controls Project (JCOP).

For the latter, a general purpose I/O concentrator called the "Embedded Local Monitor Board" (ELMB) has been developed, which is based on the CAN fieldbus. The paper describes a full vertical slice of the DCS, including the interplay between the ELMB and PVSS-II. Examples of typical control applications will be given. 

Summary:

The Detector Control System (DCS) must enable coherent and safe operation of the ATLAS detector. It also has to provide communication with the LHC accelerator and with external services, such as cooling, ventilation, electricity distribution and safety systems. Although the DCS will operate independently of the DAQ system, efficient communication between the two systems must be ensured. ATLAS consists of several subdetectors that are operationally quite independent. The DCS must be able to operate them both in stand-alone mode and in an integrated fashion as a homogeneous experiment.

The DCS consists of two main components: the Supervisory Control And Data Acquisition (SCADA) system and the Front-End (FE) systems. They will be installed in three distinct locations. SCADA will be used in the surface control room for overall operation and in the underground electronics rooms for equipment supervision. The commercial package PVSS has been chosen as the SCADA system for the four LHC experiments in the framework of the Joint Controls Project (JCOP) at CERN. It gathers the information from the FE equipment and offers supervisory control functions such as data processing, alert handling, trending and archiving, and allows for the development of applications that can be distributed over a network. This distribution facilitates the mapping of the control system onto the different subdetectors.

The FE systems are the responsibility of each subdetector and range from simple sensors and actuators up to complex computer-based devices. They will mainly reside in the experimental cavern. This imposes specific requirements, such as operation in a magnetic field of 1.5 Tesla and radiation tolerance. The I/O points are distributed over the whole volume of the detector, with distances of the order of 100 meters. The CAN fieldbus has been chosen as the data transmission medium because of these environmental and physical constraints. In order to standardize the FE systems where possible, a general-purpose, low-cost I/O concentrator, the Embedded Local Monitor Board (ELMB), has been developed. The ELMB implements the industry-standard CANopen interface, can be embedded into the subdetectors' electronics and provides several I/O functions. More details on the ELMB and results of radiation testing are presented in another contribution to this workshop.

This paper presents the implementation of a full vertical slice of the ATLAS DCS comprising the components described above. The ELMB has been interfaced to PVSS by means of the industry standard OPC protocol. The complete readout chain will be described including the functionality of each of the building blocks. A prototype has been developed and used in different control applications for the ATLAS subdetectors, like the cooling systems of the Pixel and TileCal subdetectors. Due to the size of systems in ATLAS, such as the Muon Spectrometer using about 1200 ELMB nodes, the scalability of the fieldbus has been investigated. Recent tests accommodating many ELMB modules on a bus have been performed in order to study node and bus behavior and their management. Results on applicability and performance, presented in this paper, will lead to the design of the overall fieldbus topology in ATLAS. The operation from SCADA and the distribution of functionality over the various building blocks of the readout chain will also be discussed.
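
The supervisory function of such a readout chain, limit checking and alert generation on monitored values, can be sketched schematically; the quantity names and limit values below are invented for illustration and do not come from the ATLAS DCS:

```python
# Schematic sketch of SCADA-style limit checking on readings arriving
# from ELMB-like nodes. All names and limit values are invented.

LIMITS = {
    "temperature_C": (10.0, 40.0),
    "supply_voltage_V": (4.5, 5.5),
}

def check_reading(quantity: str, value: float) -> str:
    """Return 'OK' or 'ALERT' depending on the configured limits."""
    lo, hi = LIMITS[quantity]
    return "OK" if lo <= value <= hi else "ALERT"

print(check_reading("temperature_C", 25.0))    # OK
print(check_reading("supply_voltage_V", 6.1))  # ALERT
```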


Status Report of the ATLAS SCT Optical Links

Tony Weidberg
Physics Department
Oxford University
Oxford OX1 3RH, UK
Tel: +44 (0) 1865 273370
Fax +44 (0) 1865 273417
t.weidberg1@physics.ox.ac.uk

Abstract

The readout of the ATLAS SCT and Pixel detectors will use optical links. New results on the radiation hardness and post-irradiation lifetime of Truelight VCSELs are discussed. Final prototype ATLAS-style opto-packages have been integrated into the SCT opto-harnesses and tested using a dedicated test system. These opto-harnesses have been used in the system tests of the SCT forward and barrel detectors, which has enabled different grounding configurations to be assessed. The plans for the production of the opto-harnesses are described.

Summary

Optical links will be used in the ATLAS SCT and Pixel detectors to transmit data from the detector modules to the off-detector electronics and to distribute a subset of the Timing, Trigger and Control (TTC) data from the counting room to the front-end electronics. The links are based on VCSELs and epitaxial silicon PIN diodes operating at a wavelength of 850 nm.

The radiation hardness and lifetime after irradiation have been studied for a sample of 20 Truelight VCSELs. The VCSELs were exposed to a fluence of 2×10^14 p/cm² of 30 MeV protons. Assuming that the damage scales with the NIEL value in GaAs, this is a factor of two greater than the fluence expected during 10 years of ATLAS operation. The VCSELs survived the irradiation and showed rapid annealing. In order to assess the reliability of the devices after irradiation, accelerated aging tests were performed on the irradiated VCSELs. The results of these tests will be described. The final production wafers for the DORIC4A and VDC ASICs used in the SCT optical links have been produced and tested. Neutron and photon irradiation tests have been performed on samples from these wafers and the results of these tests will be described.
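
The fluence statement above implies a 10-year expectation of half the delivered test fluence; a trivial check, under the NIEL-scaling assumption stated in the text:

```python
# The test fluence of 2e14 p/cm^2 is stated to be twice the fluence
# expected over 10 years of ATLAS operation (assuming NIEL scaling),
# implying a 10-year expectation of 1e14 p/cm^2.

test_fluence = 2e14      # protons/cm^2 delivered in the irradiation
safety_factor = 2.0      # stated margin over the 10-year expectation
expected_10yr = test_fluence / safety_factor
print(f"{expected_10yr:.0e}")  # 1e+14
```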

The opto-packages and the opto ASICs for the barrel SCT are mounted on copper/Kapton opto-flex circuits. The opto-flex circuits are used to connect to the SCT module and to make the connection to the low-mass Al power tapes. Six of the opto-flex/power-tape combinations are combined into an opto-harness. The fibres from the pig-tailed opto-packages are ribbonised and fusion-spliced into 12-way and 6-way ribbon fibre. The 12-way (6-way) ribbon fibres have MT12 (MT8) connectors at the patch-panel end. One opto-harness is used to read out one half-row of SCT modules. Several of these opto-harnesses have been assembled and the procedure used is described. For the forward SCT, the DORIC4A and VDC ASICs are mounted on the module hybrid and there is a connector for a plug-in opto-package. The fibres from several (usually six) of these opto-packages are combined into one opto-harness in a similar way to the barrel harness. The results of detailed system tests of the barrel and forward opto-harnesses will be described.

These prototype opto-harnesses have been used in the SCT system test at CERN to assess the performance of SCT modules in a realistic configuration to simulate operation in ATLAS. The results have shown that there is no significant increase in noise, compared to the operation of the SCT modules on individual electrical test stands. Different grounding schemes have been studied. In order to maintain the maximum flexibility during the SCT assembly, the harnesses are being designed so that two different grounding schemes can be implemented.

The plans for the production of the harnesses and the detailed acceptance tests that will be performed are described.


Beamtests of Prototype ATLAS SCT Modules at CERN H8 in 2000

ATLAS SCT Collaboration

Corresponding author, Zdenek Dolezal
Zdenek.Dolezal@mff.cuni.cz

Abstract

ATLAS Semiconductor Tracker (SCT) prototype modules equipped with ABCD2T chips were tested with 180 GeV pion beams at the CERN SPS. Since a binary readout method is used, threshold scans were taken at a variety of incidence angles, magnetic field levels and detector bias voltages. Results of the analysis showing module efficiencies, noise occupancies, cluster sizes and magnetic field effects will be presented. Several modules have been built using detectors irradiated to the full ATLAS dose of 3×10^14 p/cm², and one module was irradiated as a complete module. Effects of irradiation on the detector and ASIC performance will be shown.

Summary

Two types of silicon microstrip modules, barrel and forward, have been tested with 180 GeV/c pion beams at the CERN H8 SPS beamline. The barrel modules were equipped with nearly square silicon microstrip sensors, 64 mm long and 63.6 mm wide, with parallel strips at a pitch of 80 micrometers. Each module had one pair of sensors glued on the top and another pair on the bottom side of a baseboard, rotated by 40 mrad with respect to each other to provide a stereo view. Connecting each pair of sensors gave a strip length of 12 cm. The forward modules had a similar strip length but were wedge-shaped, with a fan geometry of strips at an average pitch of about 80 micrometers. The strips were connected to the readout electronics near the middle of the strips in the barrel modules and at the end of the strips in the forward modules.

A module was equipped with 12 readout chips (prototype ABCD2T), 6 on the top and 6 on the bottom side of the module. Chips were glued on specially-designed hybrids based on polyimide supported by carbon substrate.

Several modules have been built using detectors irradiated to the full ATLAS dose of 3×10^14 p/cm² with 24 GeV protons at the CERN proton synchrotron, and one module was irradiated as a complete module.

The ABCD chip utilises on-chip discrimination of the signal pulses at each silicon detector strip, producing a binary output packet. For this reason, threshold scans (at 12 threshold values) were carried out with different module parameters and environmental conditions. A total of over 1000 runs of 5000 events each were taken at 5 incidence angles, 2 magnetic field levels and 6 detector bias voltages. These data are complemented by noise runs (taken in situ, but with no beam) and local calibration runs. The readout was triggered with an external scintillator system while simultaneously measuring the particle track using 3 telescopes and the time of the beam trigger relative to the 40 MHz system sampling clock.

In the course of the data analysis, binary hits in the module channels were classified as ‘efficient hits’ or ‘noise hits’ according to their proximity to the extrapolated track position and timing. Bad channels known from lab and in-situ calibrations were excluded from the analysis.

Detection efficiency and noise occupancy were then calculated. From their dependence on the module parameters, further characteristics were determined, such as median charge, ballistic deficit, Lorentz angle, spatial resolution and pulse shapes.
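
The hit classification described above can be expressed compactly. The cut values below are invented placeholders, since the text does not quote the actual proximity and timing windows:

```python
# Schematic hit classification: a binary hit is 'efficient' if it is close
# to the extrapolated track position and in time with the trigger,
# otherwise it counts as noise. Cut values are illustrative only.

def classify_hit(hit_pos_um: float, track_pos_um: float,
                 hit_time_ns: float, trigger_time_ns: float,
                 pos_cut_um: float = 200.0, time_cut_ns: float = 25.0) -> str:
    near = abs(hit_pos_um - track_pos_um) <= pos_cut_um
    in_time = abs(hit_time_ns - trigger_time_ns) <= time_cut_ns
    return "efficient" if (near and in_time) else "noise"

print(classify_hit(120.0, 100.0, 5.0, 0.0))  # efficient
print(classify_hit(900.0, 100.0, 5.0, 0.0))  # noise
```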


THE ATLAS READ OUT DATA FLOW CONTROL MODULE AND THE TTC VME INTERFACE PRODUCTION STATUS.

Per Gällnö, CERN, Geneva, Switzerland
EP/ATE
Cellular: +41 (0)79 4527065
Fax +41 (0)22 7679495
Tel: +41 (0)22 7672404
email: per.gallno@cern.ch

Abstract

The ATLAS detector data flow from the front end to the Read Out Drivers (ROD) has to be controlled in order to prevent the ROD data buffers from filling up and data from being lost. This is achieved using a throttling mechanism that slows down the Central Trigger Processor (CTP) Level-1 Accept rate. The information about the state of the data buffers from hundreds of ROD modules is gathered in daisy-chained fan-in ROD-BUSY modules to produce a single BUSY signal to the CTP. The features and the design of the ROD-BUSY module are described in this paper.

The RD-12 TTC system VMEbus interface, TTCvi, will be produced and maintained by an external electronics manufacturer and will then be made available to the users from the CERN Electronics Pool. The status of this project is given.


The Sector Logic demonstrator of the Level-1 Muon Barrel Trigger of the ATLAS Experiment

Authors :
V. Bocci, A. Di Mattia, E. Petrolo, R. Vari, A. Salamon, S. Veneziano
INFN Rome and Universita`
degli Studi di Roma "La Sapienza"
Tel: +39-06-49914223
Fax +39-06-49914320
Andrea.Salamon@roma1.infn.it

Abstract

The ATLAS Barrel Level-1 muon trigger processes hit information from the RPC detector, identifying candidate muon tracks and assigning them to a programmable pT range and to a unique bunch-crossing number.

The on-detector electronics reduces the information from about 350k channels to about 400 32-bit data words sent via optical fiber to the so-called Sector Logic boards.

Each Sector Logic board covers a region Δη × Δφ = 1.0 × 0.2; it receives input from up to eight fibers and from thirty-two TileCal trigger towers. The output of the SL board is sent to the Muon Central Trigger Processor Interface (MUCTPI).

Each SL board selects the muons with the two highest thresholds in a sector and associates each muon with a Region of Interest of Δη × Δφ = 0.1 × 0.1. It also resolves RPC chamber overlaps inside the sector, flags all muons overlapping with a neighboring sector, and performs the coincidence with the Tile Calorimeter.

In order to keep the full LVL1 system latency below 2 µs, the Sector Logic has to perform its functions within five bunch-crossing periods.
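
Since the LHC bunch-crossing period is 25 ns, the five-crossing budget corresponds to 125 ns:

```python
# The five-bunch-crossing processing budget at the LHC bunch-crossing
# period of 25 ns corresponds to 125 ns.

bunch_crossing_ns = 25.0
budget_bc = 5
latency_ns = budget_bc * bunch_crossing_ns
print(latency_ns)  # 125.0
```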

The design and performance of the Sector Logic demonstrator, based on commercial and custom modules and firmware, are presented, together with the design of the final VME Sector Logic board.

CONCLUSIONS

The Sector Logic demonstrator design is based on the Multi Function Computing Core (MFCC) 8441 from CES. The MFCC 8441 is a PCI Mezzanine Card (PMC) hosted by the CES RIO2 VME board. It is composed of the following parts, which share a common PPC bus: a PCI bridge interfacing the PMC card with the VME host, a PowerPC microprocessor with SDRAM system memory, and a user-programmable Front-End FPGA connected to a Front-End connector and to the VME backplane.

The FE FPGA contains both the Sector Logic code and the PPC interface. A set of registers and shadow memories were included in the Sector Logic to add flexibility and to test the design.

A custom FE Adaptor Card is used to connect the Sector Logic demonstrator with the Muon Central Trigger Processor Interface (MUCTPI) via a 32-bit LVDS link running at 40 MHz.

Various kinds of tests were performed to validate the design. Functionality tests were based on data samples from the ATLAS standard simulation package. Error-rate tests were done by processing input patterns stored on board. Integration tests with the MUCTPI demonstrator were done to test the connection.

The performance of the Sector Logic demonstrator, including the PPC interface and the configuration registers, is adequate for 40 MHz operation with a maximum latency of 125 ns.

All the FE FPGA test software was written in C using the ATLAS DAQ-1 libraries. The test software also includes a high-level behavioral model of the Sector Logic, used to check the functionality of the circuit on-line.

This work has proven that the use of commercial hardware is a valid solution during the first part of the development of custom boards, because it reduces the demonstrator development time and gives the designer good support during the test phase.

The first Sector Logic VME board prototype is currently being designed on the basis of the present demonstrator.


Power Supply and Power Distribution System for the ATLAS Silicon Strip Detectors

Piotr MALECKI,
Institute of Nuclear Physics,
ATLAS Experiment Lab.
30-055 Krakow, ul Kawiory 26A
Tel: (48 12) 633 33 66
Fax: (48 12) 633 38 84
malecki@chall.ifj.edu.pl

Abstract

The Silicon Strip Detector of the ATLAS experiment has a modular structure. The granularity of its power supply system follows the granularity of the detector. This system of 4088 multi-voltage channels, providing power and control signals for the readout electronics as well as bias voltage for the silicon detectors, is described. Problems and constraints of the power distribution lines are also presented. In particular, the optimal choice among competing requirements on material, maximum voltage drop, space available for services, technological constraints and cost is discussed.

Summary

The multi-voltage power supply system of the ATLAS SCT provides high-current (of the order of 1 A) low voltages for the analog and digital parts of the module readout chips, as well as a number of low-current voltages and control signals for the optical data transmission and clock distribution circuits. The low-voltage power supply modules are associated with high-voltage modules which provide the bias voltage (up to 500 V) for the silicon detectors. Integration of the low- and high-voltage power supply modules is at the level of a common crate equipped with a custom backplane, a custom inter-module communication protocol, a common crate controller and a common crate bulk supply. One crate contains 48 independent, fully isolated power supply channels. The common LV/HV power supply crates communicate with the higher levels of the Detector Control System (DCS) via the CAN bus protocol.
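
A rough consistency check, not stated in the text, on the crate count implied by these numbers, assuming every channel sits in a 48-channel crate:

```python
# With 48 channels per crate, serving the 4088 SCT power channels
# requires at least ceil(4088 / 48) crates. This count is derived here
# for illustration; the text does not quote a crate total.
import math

channels = 4088
channels_per_crate = 48
crates = math.ceil(channels / channels_per_crate)
print(crates)  # 86
```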

Power supply modules are located outside of the ATLAS detector. Every SCT module is serviced by a multiwire cable/tape which consists of two pairs of high current lines, two pairs of the corresponding sense wires and a number of low current lines.

This power transmission path is divided into three parts. The first part, running from the detector modules through the innermost part of the detector, is made of thin aluminum-Kapton tapes called low_low_mass tapes. Similar material constraints led to the choice of low_mass tapes for the second part of the power transmission lines. Conventional, but custom-designed, copper multiwire cables are used in the third part of the path.

Many details of the design of the power supply units are closely related to the parameters of the power transmission lines. The optimal selection of these parameters, on the other hand, is the subject of difficult compromises. In this paper we summarize the main requirements and specifications for the SCT power supply and transmission system, present design concepts of both low- and high-voltage power modules, and concentrate on integration solutions. The optimization process for the selection of the transmission line parameters and its feedback on the power supply system design is also discussed.


The Final Multi-Chip Module of the ATLAS Level-1 Calorimeter Trigger Pre-processor

G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout, R. Staley, W. Stokes, S. Talbot, P. Watkins, A. Watson University of Birmingham, Birmingham, UK
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier, U. Pfeiffer, K. Schmitt, C. Schumacher, B. Stelzer University of Heidelberg, Heidelberg, Germany
B. Bauss, K. Jakobs, C. Noeding, U. Schaefer, J. Thomas University of Mainz, Mainz, Germany
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse, Queen Mary, University of London, London, UK
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee, A.R. Gillman, R. Hatley, V.J.O. Perera, A.A. Shah, T.P. Shah Rutherford Appleton Laboratory, Chilton, Didcot, UK
C. Bohm, M. Engstrom, S. Hellman, S.B. Silverstein University of Stockholm, Stockholm, Sweden
Presented by Werner Hinderer (hinderer@kip.uni-heidelberg.de)

Abstract

The final Pre-processor Multi-Chip Module (PPrMCM) of the ATLAS Level-1 Calorimeter Trigger is presented. It consists of a four-layer substrate with plasma-etched vias carrying nine dies from different manufacturers. The task of the system is to receive and digitise analog input signals from individual trigger towers, to perform complex digital signal processing in terms of time and amplitude, and to produce two independent output data streams. A real-time stream feeds the subsequent trigger processors for recognising trigger signals, and the other provides a deadtime-free readout of the Pre-processor information for the events accepted by the entire ATLAS trigger system. The PPrMCM development has recently been finalised after including substantial experience gained with a demonstrator MCM.

Summary

This paper describes the final version of the ATLAS Pre-processor Multi-Chip Module (PPrMCM). Considerable experience has been gained from a demonstrator version previously presented in this workshop series. In the ATLAS Level-1 Calorimeter Trigger, the PPrMCM combines pre-processing and readout for four trigger-tower signals on a single substrate. The electrical boundaries of the PPrMCM package were placed at locations in the processing chain where a minimum number of signals enter and leave the package. The MCM features analog input and digital output, and therefore houses both mixed-signal and purely digital chips; some of them are commercially available and others are application specific. A Pre-processor ASIC (PPrAsic), developed at the ASIC laboratory of the University of Heidelberg, forms the heart of the system and carries out the digital processing of four trigger towers. In total the PPrMCM contains nine dies: four FADCs, one Pre-processor ASIC, three LVDS serialisers for the digital data transmission to the subsequent processors, and a timer chip required for the phase adjustment of the FADC strobes with respect to the analog input signals.

The tasks of the PPrMCM are:
* To digitise four analog trigger-tower signals at 40 MHz with 10-bit resolution. Digitisation at 12 bits is used to extend the effective number of bits.
* To process digital trigger-tower data in terms of energy calibration and bunch-crossing timing identification.
* To serialize the processed trigger-tower data using high-speed Bus LVDS chip-sets.
* To provide deadtime-free readout of the data from four trigger towers.

In order to achieve this, the MCM consists of:
* Four 12-bit FADCs manufactured by Analog Devices (AD9042).
* One four-channel PPrAsic, providing readout and pre-processing.
* One timer chip (Phos4) for the phase adjustment of the FADC strobes with respect to the analog input signals.
* Three Bus LVDS serialisers, 10 bits at 40 MHz (400 Mbps user data rate, 480 MBd including start and stop bits).

The physical substrate of the PPrMCM is a combination of three flexible polyimide foils, laminated onto a rigid copper substrate to form four routing layers. Plasma etching is used for the so-called buried via connections to adjacent layers, and routing structures are formed in copper using conventional etching techniques. The surface of the top layer is gold-plated to permit safe bonding of aluminium wires. The technology described is the TwinFlex MCM-L technology provided by the company Wuerth (Germany). Detailed simulations of the electrical, thermal and timing properties of the PPrMCM have been carried out. The layout of the substrate has been finalised. The production of a pre-series of 10 PPrMCMs is expected for the autumn of 2001.
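
The serialiser figures quoted above are self-consistent, as a short check shows:

```python
# 10 user bits per 40 MHz word give 400 Mbps of user data; adding one
# start and one stop bit per word gives the quoted 480 MBd line rate.

word_rate_mhz = 40
user_bits = 10
framing_bits = 2  # start + stop bit

user_rate_mbps = user_bits * word_rate_mhz
line_rate_mbd = (user_bits + framing_bits) * word_rate_mhz
print(user_rate_mbps)  # 400
print(line_rate_mbd)   # 480
```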


Prototype Readout Module for the ATLAS Level-1 Calorimeter Trigger Processors

G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout, R. Staley, W. Stokes, S. Talbot, P. Watkins, A. Watson University of Birmingham, Birmingham, UK
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier, U. Pfeiffer, K. Schmitt, C. Schumacher, B. Stelzer University of Heidelberg, Heidelberg, Germany
B. Bauss, K. Jakobs, C. Noeding, U. Schaefer, J. Thoma University of Mainz, Mainz, Germany
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse
Queen Mary, University of London, London, UK
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee, A.R. Gillman, R. Hatley, V.J.O. Perera, A.A. Shah, T.P. Shah Rutherford Appleton Laboratory, Chilton, Didcot, UK
C. Bohm, M. Engstrom, S. Hellman, S.B. Silverstein University of Stockholm, Stockholm, Sweden
Corresponding author: Viraj Perera (viraj.perera@rl.ac.uk)

Abstract

The level-1 calorimeter trigger consists of three subsystems: the Preprocessor, the electron/photon and tau/hadron Cluster Processor (CP), and the Jet/Energy-sum Processor (JEP). The CP and JEP will receive digitised calorimeter trigger-tower data from the Preprocessor and will provide trigger multiplicity information to the Central Trigger Processor and region-of-interest (RoI) information for the level-2 trigger. They will also provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. This paper outlines a readout system based on FPGA technology, providing a common solution for both DAQ readout and RoI readout for the CP and the JEP.

Summary

The ATLAS level-1 Calorimeter Trigger consists of three subsystems: the Preprocessor, the electron/photon and tau/hadron Cluster Processor (CP), and the Jet/Energy-sum Processor (JEP). The CP and JEP will receive digitised calorimeter trigger-tower data from the Preprocessor, and will provide trigger multiplicity information to the Central Trigger Processor via Common Merger Modules (CMMs; see the accompanying paper). Using Readout Driver (ROD) modules, the CP and JEP will also provide region-of-interest (RoI) information for the level-2 trigger, and intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes.

The ROD module for both the Cluster Processor and the Jet/Energy-sum Processor is based on FPGA technology. We have designed these modules to be common to both subsystems, using appropriate firmware to handle several different types of data: RoIs, and DAQ data for both the CP and the JEP.

The collection of both DAQ and RoI data starts at the processor FPGAs on the processor modules, where data are captured in dual-port RAMs for every LHC bunch crossing. Following a level-1 accept signal, received from the Central Trigger Processor via the Timing Control Module, the data are transferred from these RAMs to FIFOs. The dual-port RAMs and FIFOs are implemented in the FPGAs. Data from up to 20 of these FPGAs on a processor module are merged onto a single high-speed serial link (HP G-link).

The prototype ROD module receives data from four processor modules. It processes and stores the data (with zero suppression if required) in FIFO buffers, formats the data into ATLAS DAQ fragments, and transmits them to DAQ and to the level-2 trigger via S-links at the level-1 accept rate. The data sent on the S-links can be spied on for monitoring, and are available in dual-port memories to be read out to a single-board computer via VME for analysis.

If more processing power is required, a PCI mezzanine card (PMC) processor can be plugged onto the module.

The prototype ROD is implemented as a triple-width 6U VME module with four common mezzanine card (CMC) positions (two on either side): one G-link receiver CMC card interfacing to four processor modules, two S-link positions for DAQ and RoIs, and one position for a commercial PMC co-processor card. It also hosts a TTC receiver card with a CERN TTCrx chip to supply the 40 MHz clock, the level-1 accept, and other signals such as bunch-crossing number, event number and trigger type.

Firmware for CP readout to DAQ and RoI readout to the level-2 trigger has been developed and tested, and initial integration tests have been carried out with the RoI builder (ROIB) and the readout subsystem (ROS). The experience gained from this prototype module will benefit the design of the final 9U production ROD module.
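The capture path described above (per-crossing storage in a dual-port RAM, transfer to a derandomising FIFO on a level-1 accept) can be sketched in software. This is an illustrative model, not the actual firmware; the pipeline depth and the fixed trigger latency used here are made-up values:

```python
from collections import deque

# Behavioural sketch of the readout capture path: every bunch crossing the
# processor FPGA writes its result into a circular pipeline memory (the
# dual-port RAM); on a level-1 accept, the sample from L1A_LATENCY crossings
# ago is copied into a derandomising FIFO. Depth and latency are hypothetical.
PIPELINE_DEPTH = 128   # assumed dual-port RAM depth
L1A_LATENCY = 100      # assumed fixed trigger latency, in bunch crossings

pipeline = deque(maxlen=PIPELINE_DEPTH)  # models the dual-port RAM
readout_fifo = deque()                   # models the derandomising FIFO

def bunch_crossing(data, l1a):
    """Store one crossing; on a level-1 accept, capture the delayed sample."""
    pipeline.append(data)
    if l1a and len(pipeline) > L1A_LATENCY:
        # the accepted crossing happened L1A_LATENCY slots in the past
        readout_fifo.append(pipeline[-1 - L1A_LATENCY])

for bc in range(200):
    bunch_crossing(bc, l1a=(bc == 150))

print(readout_fifo[0])  # crossing 150 - 100 = 50
```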


One Size Fits All: Multiple Uses of Common Modules in the ATLAS Level-1 Calorimeter Trigger

G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout, R. Staley, W. Stokes, S. Talbot, P. Watkins, A. Watson
University of Birmingham, Birmingham, UK
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier, U. Pfeiffer, K. Schmitt, C. Schumacher, B. Stelzer
University of Heidelberg, Heidelberg, Germany
B. Bauss, K. Jakobs, C. Noeding, U. Schaefer, J. Thomas
University of Mainz, Mainz, Germany
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse
Queen Mary, University of London, London, UK
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee, A.R. Gillman, R. Hatley, K. Jayananda, V.J.O. Perera, A.A. Shah, T.P. Shah
Rutherford Appleton Laboratory, Chilton, Didcot, UK
C. Bohm, M. Engstrom, S. Hellman, S.B. Silverstein
University of Stockholm, Stockholm, Sweden
Corresponding author: Eric Eisenhandler (e.eisenhandler@qmw.ac.uk)

Abstract

The architecture of the ATLAS Level-1 Calorimeter Trigger has been improved and simplified by using a common module to perform different functions that originally required three separate modules. The key is the use of FPGAs with multiple configurations, and the adoption by different subsystems of a common high-density custom crate backplane, designed with equal-width data paths and a minimal VMEbus. One module design can now be configured to count electron/photon and tau/hadron clusters, to count jets, or to form missing and total transverse-energy sums and compare them with thresholds. In addition, operations at both crate and system level are carried out by the same module design.

Summary

The ATLAS Level-1 Calorimeter Trigger executes trigger algorithms in two parallel subsystems: the Cluster Processor (CP) and the Jet/Energy-sum Processor (JEP). Cluster Processor Modules identify electron/photon and tau/hadron clusters, sending the numbers found to merger modules that sum cluster multiplicities for 16 thresholds, first by crate and then for the four-crate subsystem. In the original design these were Cluster Merger Modules, fed by cables to a separate crate. Jet/Energy Modules (JEMs) identify jets, and also sum transverse energy and its components over small regions. The numbers of jets found are sent to merger modules that sum jet multiplicities for eight thresholds, first by crate and then for the two-crate subsystem. In the original design this was done by Jet Merger Modules in each crate, fed via the backplane. In parallel, transverse-energy sums were formed by Sum Merger Modules in each crate, also fed via the backplane, followed by subsystem summing and comparison of total and missing transverse energy with sets of thresholds.

The functionality of the Cluster and Jet Merger Modules was very similar, so those two designs were unified first. A simulation showed that data signals could be transmitted over the full backplane width at 40 MHz single-ended (mandatory due to pin counts), so the same in-crate layout could be adopted for both the CP and the JEP. It was then shown that the energy merging could be done by the same Common Merger Module (CMM), since the 36-bit wide JEM transverse-energy information could be compressed to 24 bits without significant effect on trigger performance. The FPGA code for summing multiplicities, or for computing total and missing transverse energy, could also run in the same FPGAs. The final rationalisation was to adopt a common high-density custom backplane for both processors.

Although this required careful module design, it has advantages for the trigger beyond simplification. There are two CMMs in each crate for counting hits or adding transverse energy; which operations they carry out is determined automatically by the crate and slot occupied. To keep to one design, all modules have facilities for carrying out the final subsystem-wide merging, even though only four of the 12 CMMs are needed for this function.

Pins on the common backplane are at a premium (820 pins per module, 5 rows at 2 mm pitch), and full VMEbus cannot be accommodated, so a minimal set of VME lines is used. Inter-module fan-in/fan-out and input data to the CMMs occupy most of the pins, while timing signals and a CANbus for monitoring voltages and temperatures are also present.

This backplane and CMM arrangement has allowed the addition of new trigger algorithms, namely forward jets, approximate total transverse energy in jets, and total transverse energy exceeding local thresholds. The programmability of the logic allows other variations to be added later. In addition, two other modules perform multiple roles: a common Readout Driver (see the accompanying paper) handles both readout data and level-1 trigger regions-of-interest in both the CP and the JEP, and a common Timing Control Module will service the CP, the JEP, and also the Preprocessor subsystem.
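The crate-level hit merging a CMM performs can be sketched as a per-threshold saturating sum: counts from several processor modules are added, and an overflowing count sticks at the maximum the output field can carry. The 3-bit field width below is an assumption chosen for illustration, not taken from the paper:

```python
# Illustrative sketch of crate-level multiplicity merging: per-threshold
# counts from several processor modules are summed with saturation.
# The 3-bit output field (saturating at 7) is a hypothetical width.
MULT_BITS = 3
MULT_MAX = (1 << MULT_BITS) - 1  # saturate at 7

def merge_multiplicities(module_counts):
    """Sum per-threshold multiplicities across modules, with saturation."""
    n_thresh = len(module_counts[0])
    return [min(MULT_MAX, sum(m[t] for m in module_counts))
            for t in range(n_thresh)]

# four modules, four thresholds each
counts = [[1, 0, 3, 2], [2, 0, 3, 1], [0, 1, 2, 0], [1, 0, 1, 0]]
print(merge_multiplicities(counts))  # [4, 1, 7, 3] -- third threshold saturates
```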


Radiation-hard ASICs for optical data transmission in the ATLAS Pixel detector

Authors:
K.E. Arms, K.K. Gan, M. Johnson, H. Kagan, R. Kass, C. Rush, S. Smith and M. Zoeller
Department of Physics, The Ohio State University,
Columbus, Ohio 43210, USA

J. Hausmann, M. Holder, M. Kraemer, A. Niculae and M. Ziolkowski *
Fachbereich Physik, University of Siegen,
57068 Siegen, Germany

*corresponding author: e-mail michal.ziolkowski@cern.ch

Abstract

The aim of our work is to design radiation-hard CMOS electronics for optical data transmission in the ATLAS Pixel detector. Two ASICs are under development: a VCSEL driver chip for 80 Mb/s data transmission from the detector, and a Bi-Phase Mark decoder chip to recover the control data and the 40 MHz clock received optically by a PIN diode on the detector side. Both ASICs are implemented in the radiation-hard 0.8 µm DMILL technology. Samples of the chips were recently irradiated with 25 GeV protons up to a total dose of 55 Mrad, and conclusive results are expected in the summer of 2001.

Summary

Originally, the optical driver and Bi-Phase Mark decoder ASICs were designed by the SemiConductor Tracker community and implemented in the AMS 0.8 µm npn bipolar process. To satisfy the needs of the Pixel community, we have re-designed both circuits and fabricated them in the DMILL radiation-hard CMOS technology, which provides low power dissipation and flexibility in assembly. First encouraging results are now available. Most of the ASICs survived an irradiation dose of 55 Mrad in the initial irradiation trial. As expected, unfavourable effects of irradiation on the decoder chip are compensated by an increased supply voltage: from 3.2 V initially up to 5.0 V after the total exposure. A detailed comparison of the decoder chip characteristics before and after irradiation will be carried out by mid-May 2001, once the post-irradiation samples are released. In particular, the low input-current threshold found before exposure will be re-examined. The VCSEL driver chip also sustained its good performance during the irradiation trial. A minor drop of the bright current after irradiation is easily corrected by tuning the bias current, whereas the observed increase of the dim current is a favourable change, since irradiated VCSELs show a higher bias threshold. A third DMILL iteration of the Bi-Phase Mark decoder chip is now in preparation. Its performance at low input current will be equalised by including a feedback circuit for voltage-offset correction, and three independent decoder channels will be arranged on a single chip, as required by the array-like assembly plan. The new DMILL samples will be under test in early October 2001. In addition to the DMILL implementation, we have recently designed both ASICs in a deep-submicron 0.25 µm technology, with the expectation of minimising power dissipation and achieving very good inherent radiation tolerance. First samples will be evaluated in June 2001.
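Bi-Phase Mark is the line code the decoder chip recovers clock and data from: every bit cell begins with a level transition, and a '1' adds a second transition in mid-cell, so the signal is DC-balanced and carries its own clock. A minimal behavioural model (a software sketch, not the chip's implementation):

```python
# Behavioural sketch of Bi-Phase Mark coding. Each bit occupies two
# half-cells; every cell starts with a transition, and a '1' adds a
# mid-cell transition. This models the line code, not the decoder chip.
def bpm_encode(bits, level=0):
    """Return two half-cell levels per input bit."""
    out = []
    for b in bits:
        level ^= 1        # transition at every cell boundary
        out.append(level)
        if b:
            level ^= 1    # mid-cell transition encodes a '1'
        out.append(level)
    return out

def bpm_decode(halves):
    """A '1' is a cell whose two half-cells differ."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
assert bpm_decode(bpm_encode(data)) == data
print("round trip ok")
```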


Design and Test of a DMILL Module Controller Chip for the Atlas Pixel Detector

Roberto Beccherle
INFN - Sez. di Genova
Via Dodecaneso, 33
I-16146 GENOVA
Tel. +39 10 353-6485
Fax +39 10 353-6319
Roberto.Beccherle@ge.infn.it

Abstract

The main building block of the Atlas Pixel Detector is a "module", made of a silicon detector bump-bonded to 16 analog Front-End (FE) chips. All FE chips are connected in a star topology to the Module Controller Chip (MCC), which performs system configuration, event building, and control and timing distribution. The electronics has to tolerate radiation fluences of up to 10^15 cm^-2 1 MeV-equivalent neutrons during the first three years of operation. The talk describes the first implementation of the MCC in DMILL (a 0.8 µm rad-hard technology). Results on tested dice, and results of irradiating these devices at the CERN PS up to 30 Mrad, will be presented. The chip was operating during irradiation, which allowed SEU effects to be measured.

Summary

The Module Controller Chip (MCC) is an ASIC which provides complete control of the Atlas Pixel Detector module. Besides the MCC the module hosts 16 FE chips bump-bonded to a Silicon Detector.

The talk is divided into three sections.

In the first section we describe the requirements that the MCC has to fulfil. The main feature of this device is its ability to perform event building, which provides some data compression on the data coming from the 16 Front-End chips read out in parallel. The system clock frequency is 40 MHz. Inside the MCC, 16 full-custom FIFOs temporarily store the data received from the FE chips. Event building is performed by extracting hits from these FIFOs and formatting the event into one or two serial streams, allowing data transfer at up to 160 Mbit/s. All operations on the module (configuration of the MCC and FEs, triggers and resets) are performed by means of a serial protocol decoded inside the MCC. The trigger command decoding allows for a single bit flip on the data line without loss of timing information.

First a prototype and then a full version of the chip were designed and tested; this is described in the second section of the talk. The prototype chip, called MCC-D0, consists of a full-custom FIFO, the complete command decoder, and an array of configuration registers. The second chip is a full-scale MCC (MCC-D2), designed to be integrated in a rad-hard version of the module.

The third part describes in detail the tests made on both chips, focusing on the irradiation tests done at the PS at CERN, where 8 MCC-D0s were successfully irradiated up to 30 Mrad. The chips were operated during irradiation, which allowed us to perform a detailed measurement of both static and dynamic Single Event Upset (SEU) effects. We also describe our test system, developed in Genova, which allows a comparison between the actual hardware, hosted on a VME board, and a C++ simulation of the MCC.


An Emulator of Timing, Trigger and Control (TTC) System for the ATLAS Endcap Muon System

Yasuaki Ishida, Chikara Fukunaga, Ken-ichi Tanaka, Naofumi Takahata
(Department of Physics, Tokyo Metropolitan University)
for ATLAS TGC Electronics Group
E-mail:ishida@comp.metro-u.ac.jp <Main>
URL:http://tmubsun.center.metro-u.ac.jp/ishida/

Abstract

We present the development of an emulator of the TTC system. The emulator is implemented as an ASIC and combines, in one IC, generation of the LHC bunch pattern, random trigger generation, and the relevant functionalities of the TTCvi, TTCvx and TTCrx. A test environment for detector front-end modules using the TTC system can therefore be simplified dramatically, and thanks to the random trigger generation the emulator provides a realistic experimental environment for an electronics system. We discuss the functions of the emulator and test results of the ASIC.

Summary

We have developed a TTC emulator for the ATLAS Endcap Muon electronics test setup. The emulator provides generation of the LHC bunch pattern and of some signals of the TTCvi, TTCvx and TTCrx on one IC chip.

Generating TTC signals normally requires a relatively large-scale setup (two VME modules (TTCvi and TTCvx), a TTCrx chip, and O/E and E/O converters), even when only a few signals are needed for electronics development and debugging. With this emulator such a complicated system, and the purchase of many modules, is unnecessary; costs are held down and the test setup is simplified.

The emulator is implemented in a 0.6 µm ASIC and can generate the following TTC signals: Trigger (L1A), Bunch Counter Reset (BCR), Event Counter Reset (ECR), Pre-trigger, Orbit, BC, BCID[11:0], Event ID (EVID[23:0]), Trigger Type, and the 40 MHz clock.

Trigger is the Level-1 Accept signal, and Pre-trigger announces a Level-1 Accept 2.5 µs in advance. To approximate the actual experimental environment more closely, triggers can be generated at random; since truly random signals cannot be produced, the emulator generates a pseudo-random pulse pattern.
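Pseudo-random pulse patterns of this kind are usually produced in hardware with a linear-feedback shift register (LFSR); the abstract does not say which generator the ASIC uses, so the 16-bit taps below are an assumption for illustration:

```python
# Sketch of pseudo-random trigger generation with a Fibonacci LFSR, the
# usual hardware technique for "random" pulse patterns. The 16-bit register
# and tap positions (16, 14, 13, 11) are hypothetical choices.
def lfsr_stream(seed=0xACE1, taps=(16, 14, 13, 11)):
    """Yield one pseudo-random bit per clock from a 16-bit LFSR."""
    state = seed
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        yield bit

gen = lfsr_stream()
pattern = [next(gen) for _ in range(16)]
print(pattern)  # a fixed, repeatable pseudo-random pulse pattern
```

Being deterministic, such a pattern repeats exactly on every run, which is convenient for debugging front-end electronics.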

BC is generated following the LHC, SPS and PS bunch structure. One bunch slot is 25 ns, and 72 bunches followed by 12 missing bunches form one PS batch. The bunch disposition in the LHC, SPS and PS repeats 3-batch and 4-batch groups; the 3-batch and 4-batch cycles are interleaved in the form 334 334 334 333, in order to fill each ring with a total of 2808 bunches. The 3564 bunch slots (including missing bunches) form one Orbit signal of 88.924 µs, which is one LHC (single-ring) cycle.
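The orbit figure quoted above can be checked by arithmetic: 3564 bunch slots, each one period of the 40.079 MHz bunch-crossing clock (slightly under 25 ns), give one 88.924 µs orbit:

```python
# Arithmetic behind the bunch-structure numbers above: 3564 slots per ring,
# each one period of the LHC bunch-crossing clock, make up one orbit.
F_BUNCH = 40.079e6       # LHC bunch-crossing frequency in Hz
SLOTS_PER_ORBIT = 3564   # bunch slots per ring, filled or empty

slot_ns = 1e9 / F_BUNCH                  # ~24.95 ns per slot
orbit_us = SLOTS_PER_ORBIT * slot_ns / 1e3
print(f"{orbit_us:.3f} us per orbit")    # ~88.924 us
```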

BCR and ECR are counter-reset signals. BCR is generated automatically when the BCID is 0; both BCR and ECR can also be generated manually when the appropriate input signals are high.

Initially we implemented the emulator in a Xilinx XCS40PQ208 FPGA, which is easy to modify. With this FPGA, however, we were not able to reproduce the 40 MHz clock and the BC pattern stably and correctly. The emulator chip has therefore been rebuilt as an ASIC rather than in another, faster FPGA, and new signal emulation (Trigger Type, Event ID) has been added for the ASIC version.

For the actual setup we mount a TTCrx chip on the test board developed by the CERN MIC group, used as a mezzanine. If the emulator chip is mounted on the same board instead of the TTCrx, we can therefore emulate all the TTC signals required by the system.


Activation studies for an ATLAS SCT module

C.Buttar, I.Dawson, A.Moraes
(University of Sheffield, UK)
Ian.Dawson@cern.ch

Abstract

abstract for a poster contribution to the LHC

One of the consequences of the harsh radiation environment at the LHC experiments will be induced activation of the detector systems, with implications for operation and maintenance scenarios. We have simulated the radiation environment of the ATLAS SCT and made first estimates of the levels of induced activation of an SCT module, studying both neutron-induced and spallation-induced activation. Dose rates are also obtained and compared with other parts of the ATLAS detector for which estimates have been made.


Prototype Slice of the Level-1 Muon Trigger in the Barrel Region of the ATLAS Experiment

V.Bocci, G.Chiodi, S.Di Marco, E.Gennari, E.Petrolo, A.Salamon, R.Vari, S.Veneziano
INFN Roma, Dept. of Physics, Università degli Studi di Roma "La Sapienza"
p.le Aldo Moro 2, 00185 Rome, Italy

Abstract

The ATLAS barrel level-1 muon trigger system is split into an on-detector and an off-detector part. Signals from the first two RPC stations are sent, on the detector, to dedicated ASICs mounted on the low-pT Pad boards, which select muon candidates compatible with a programmable pT cut of around 6 GeV/c and produce an output pattern containing the low-pT trigger results. This information is transferred to the corresponding high-pT Pad boards, which collect the overall low-pT result and perform the high-pT algorithm using the outer RPC station, selecting candidates above a threshold of around 20 GeV/c. The combined information is sent via optical fibre off the detector to the optical receiver boards and then to the Sector Logic boards, which count the muon candidates in a region of Δη × Δφ = 1.0 × 0.1 and encode the trigger results. The resulting trigger data are sent to the Central Trigger Processor Muon Interface on a dedicated copper link. The read-out data for events accepted by the level-1 trigger are stored on the detector and then sent to Read-Out Drivers via the same receiver boards, sharing the bandwidth with the trigger data.

A trigger slice is made of the following components: a low-pT board, containing four Coincidence Matrix (CM) boards; a high-pT board, containing four CM boards, the Pad logic board and the optical link transmitter; an optical link receiver; a Sector Logic board; and a Read-Out Driver board. The functionality of the prototypes will be presented.

Summary

The ATLAS barrel level-1 muon trigger system has the following main requirements: coarse measurement and discrimination of the muon transverse momentum pT; bunch crossing identification; fast and coarse tracking to identify tracks in the precision chambers that are related to the muon candidate; 2nd-coordinate measurement with a required resolution of 5–10 mm.

The muon trigger system in the barrel is based on full-granularity information from three stations of a dedicated trigger detector, the Resistive Plate Chambers (RPCs), covering the region −1 < η < 1. Two stations are located near the centre of the magnetic field region, inside the air-core toroids, and provide the low-pT trigger (pT > 6 GeV), while the addition of the third station, at the outer radius of the magnet, allows the pT threshold to be raised to more than 20 GeV, thus providing the high-pT trigger.

A trigger station is made of two detector layers, each composed of two RPC detectors read out by two orthogonal series of pick-up strips of about 3 cm pitch: the η strips, parallel to the MDT wires (z direction), provide the "bending" coordinate of the trigger detector; the φ strips, orthogonal to the wires, provide the second, "non-bending" coordinate.

To reduce the rate of accidental triggers due to low-energy background particles in the ATLAS cavern, the algorithm is performed in both the η and φ projections, for both the low-pT and high-pT triggers. The first stage of the trigger algorithm is performed separately and independently for the two projections, and a valid trigger is generated only if the trigger conditions are satisfied in both. The trigger logic requires three out of four layers in the middle stations for the low-pT trigger and, in addition, one of the two outer layers for the high-pT trigger. The η and φ trigger information is combined to generate the Regions-of-Interest (RoIs), identifying areas of the apparatus in which track candidates are found, with a granularity of ~0.1 × 0.1 in the η-φ pivot plane.
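The majority requirements above (3-out-of-4 middle-station layers for low-pT, plus at least 1-out-of-2 outer layers for high-pT) reduce, per projection, to simple hit counting; a minimal sketch:

```python
# Sketch of the layer-majority requirement described above, for one
# projection: low-pT needs hits in at least 3 of the 4 middle-station
# layers; high-pT additionally needs at least 1 of the 2 outer layers.
def low_pt(middle_hits):
    """middle_hits: four booleans, one per middle-station layer."""
    return sum(middle_hits) >= 3

def high_pt(middle_hits, outer_hits):
    """outer_hits: two booleans, one per outer-station layer."""
    return low_pt(middle_hits) and sum(outer_hits) >= 1

print(low_pt([1, 1, 1, 0]))           # True: 3-out-of-4 satisfied
print(high_pt([1, 1, 1, 0], [0, 0]))  # False: no outer-layer hit
```

In the real system this evaluation is done per projection, and a trigger is issued only if both the η and φ projections pass.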

The signals from the RPC detectors are amplified, discriminated and digitally shaped on the detector. In the low-pT trigger, for each of the η and φ projections, about 200 RPC signals from the two detector doublets, RPC1 and RPC2, are sent to a Coincidence Matrix (CM) board containing a CM chip. This chip performs almost all of the functions needed for the trigger algorithm, and also for the read-out of the strips: it aligns the timing of the input signals, performs the coincidence and majority operations, and applies the pT cut at three different thresholds. It also contains the level-1 latency pipeline memory and the de-randomising buffer. The CM board produces an output pattern containing the low-pT trigger results for each pair of RPC doublets in the η or φ projection. The information of two adjacent CM boards in the η projection, and the corresponding information of the two CM boards in the φ projection, are combined in the low-pT Pad Logic (Pad) board. The four low-pT CM boards and the corresponding Pad board are mounted on top of the RPC2 detector. The low-pT Pad board generates the low-pT trigger result and the associated RoI information. This information is transferred, synchronously at 40 MHz, to the corresponding high-pT Pad board, which collects the overall low-pT and high-pT result. In the high-pT trigger, for each of the η and φ projections, the RPC signals from the RPC3 doublet, together with the corresponding low-pT pattern result, are sent to a CM board very similar to the one used in the low-pT trigger. This board contains the same coincidence-matrix chip as the low-pT board, programmed for the high-pT algorithm. The high-pT CM board produces an output pattern containing the high-pT trigger results for a given RPC doublet in the η or φ projection. The information of two adjacent CM boards in the η projection and the corresponding information of the two CM boards in the φ projection are combined in the high-pT Pad Logic board.

The four high-pT CM boards and the corresponding Pad board are mounted on top of the RPC3 detector. The high-pT Pad board combines the low-pT and high-pT trigger results. The combined information is sent, synchronously at 40 MHz, via optical links to a Sector Logic (SL) board located in the USA15 counting room. Each SL board receives inputs from up to eight Pad boards, combining and encoding the trigger results of one of the 64 sectors into which the barrel trigger system is subdivided. The trigger data elaborated by the Sector Logic are sent, again synchronously at 40 MHz, to the Muon Interface to the Central Trigger Processor (MUCTPI), located in the same counting room. Data are read out from the high-pT Pad boards only; these data include the RPC strip pattern and some additional information used in the LVL2 trigger. The read-out data for events accepted by the LVL1 trigger are sent asynchronously to Read-Out Drivers (RODs) located in the USA15 underground counting room, and from there to the Read-Out Buffers (ROBs). The data links for the read-out data are independent of those used to transfer partial trigger results to the SL boards. The Pad, SL and MUCTPI modules also generate read-out data on their partial trigger results, in order to monitor the system.


Radiation test and application of FPGAs in the Atlas Level 1 Trigger.

V.Bocci(1) , M. Carletti(2), G.Chiodi(1), E. Gennari(1), E.Petrolo(1), A.Salamon(1), S.Veneziano(1)
(1) INFN Roma, Dept. of Physics, Università degli Studi di Roma "La Sapienza"
p.le Aldo Moro 2, 00185 Rome, Italy
(2) INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, Frascati (Roma)

Abstract

The use of SRAM-based FPGAs provides the benefits of re-programmability, in-system programming, low cost and a fast design cycle.

However, single event upsets (SEUs) in the configuration SRAM due to radiation can change the design's function, which in the LHC environment restricts the use of these devices to areas with a low hadron rate.

Since we expect, in the Atlas muon barrel, an integrated dose of 1 krad and 10^10 hadrons/cm² over 10 years, it becomes possible to use commercial versions of these devices. SEU errors can be corrected online by reading back the internal configuration and, if necessary, quickly re-programming the device.
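The readback-and-reprogram scheme above amounts to a scrubbing loop: periodically compare the readback of the configuration memory against a golden image and reload on mismatch. The sketch below models this with byte arrays standing in for the real configuration readback interface, which is not shown:

```python
# Illustrative scrubbing loop: read back the configuration, compare it to
# the golden bitstream, and reprogram when an SEU has flipped bits. The
# byte-array "device" is a stand-in for the real readback/program interface.
GOLDEN = bytes([0x5A] * 1024)  # reference configuration image

def scrub(read_back, reprogram):
    """One scrubbing cycle; returns True if a reload was needed."""
    if read_back() != GOLDEN:
        reprogram(GOLDEN)
        return True
    return False

# toy device whose configuration has suffered a single-bit upset
device = bytearray(GOLDEN)
device[100] ^= 0x01
corrected = scrub(lambda: bytes(device),
                  lambda img: device.__setitem__(slice(None), img))
print(corrected, bytes(device) == GOLDEN)  # True True
```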

In the framework of the Atlas Level-1 muon trigger, we performed measurements on Xilinx Virtex devices and their configuration Flash PROMs:

With the SEU rate expected for our environment, we found a solution to correct the errors online.


An optical link interface for the Atlas Tile-Calorimeter


Daniel Eriksson, Jonas Klereborn, Magnus Ramstedt and Christian Bohm
University of Stockholm, Sweden

Abstract

An optical (1300 nm) link interface has been developed in Stockholm for the Atlas Tile-Calorimeter. The link serves as the readout for one entire TileCal drawer, i.e. up to 48 front-end channels. It also contains a receiver for the TTC clock and messages, distributing these to the full digitizer system. Digitized data are serialized in the digitizer boards and supplied with headers and CRC control fields. Data in this protocol are then sent via G-link to an Odin S-link receiver card, where they are unpacked and parallelized by specially developed Altera code. The entire read-out part of the interface has been duplicated for redundancy, with two dedicated output fibers. The TTC distribution has also been made redundant by using two receivers (and two input fibers), both capable of distributing the TTC signal; a high-pass filter, tuned to the frequency of an active TTC link, decides which receiver to use. To decrease the sensitivity to radiation, the complexity of the interface has been kept to a minimum, which also benefits the system cost. To facilitate mechanical installation, the interface has been given an L-shape so that it can be mounted closely on top of one of the digitizer boards without interfering with its components.

Summary

The interface link provides both data read-out and TTC distribution for up to 48 front-end channels. The basic design idea of the interface card is to move as much logic as possible outside the detector, where it is not exposed to radiation.

Since there is very little room in the middle of the Tile-Calorimeter electronics drawer, the interface card has been made as small as possible. To minimize the combined height of the front-end electronics plus the interface card, the card has been designed almost like a mezzanine card, leaving room for the necessary cables in the drawer interconnection. The card has an L shape so that the data input cables can be connected after mounting, while a regular data output connector can still be used. The card uses only 3.3 V, taken directly from the digitizer board beneath via a purpose-mounted 8-pin connector.

A passive high-pass filter senses which TTC input channel to use: a functioning TTC signal forces an enable signal high, and if the TTC signal fails, its frequency will with high probability decrease, enabling the other channel. Since it uses only discrete components, the filter is very radiation tolerant.
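A back-of-envelope sketch of the idea: a passive RC high-pass with a cutoff well below the ~40 MHz activity of a live TTC link passes the active channel but attenuates a stuck or slowly drifting line. The component values below are hypothetical, chosen only to illustrate the cutoff calculation:

```python
# RC high-pass cutoff arithmetic for the channel-select filter described
# above. R and C are assumed example values, not the board's actual parts.
import math

R = 330.0   # ohms (assumed)
C = 47e-12  # farads (assumed)

f_c = 1.0 / (2.0 * math.pi * R * C)   # -3 dB cutoff frequency
print(f"cutoff ~ {f_c / 1e6:.1f} MHz")  # ~10.3 MHz: 40 MHz passes, DC does not
```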


The present card uses the HDMP-1032 G-link chip from Agilent. It is specified to run at up to 1.4 Gbaud, which is not sufficient for 32-bit data at 40 MHz; we presently use 20 MHz, which is adequate for the needs of the digitizer system.

The high-speed lines to the VCSEL cathodes are very sensitive to noise and must be made as short as possible; therefore, a receiver card was placed on top of the interface card. This also improves the impedance definition of the VCSEL connections.

To get a PIN-TIA operating at 3.3 V we had to use separate components: instead of a TIA integrated with the diode, we used a PIN diode and a separate TIA chip. Since we use LVDS repeaters, a complete optical receiver IC is more than we need; a simple PIN-TIA gives an LVDS-compatible signal, significantly reducing space and cost. The transmitter is based on the MAX3286 laser driver, a compact solution well suited to the requirements of the board.

Full bit error rate and radiation tests will be conducted during the summer.

We hope that the next version of the interface card will use the GOL chip, a G-link clone developed by the CERN Microelectronics Group. We will also try to find a 3.3 V PIN-TIA.


Tests and Production of the ATLAS Tile Calorimeter Digitizer


Jonas Klereborn, Magnus Ramstedt, Svante Berglund, Christian Bohm,
Kerstin Jon-And, Sam Silverstein.
Stockholm University

Abstract

After a successful pre-production series, full-scale production of the TileCal digitizer will begin during the summer of 2001. To ensure functionality and quality, a test scheme has been developed. Before production, all components were radiation tested. After component mounting, each digitizer is tested at the producer in a specially designed, reduced test-bench to verify its functionality. All digitizers then pass through burn-in and are tested again in a full test-bench that reproduces operational conditions, using custom-designed software which ensures that full functionality is maintained. Test data are stored in an auto-generated file for future reference. Similar test software is later used at Clermont-Ferrand, where the drawers containing all the detector electronics are assembled; their test results will be cross-referenced with the original test data.

Summary

During both neutron and ionizing radiation tests, the TileDMU, a gate array responsible for most of the digitizer functionality, was continuously exercised with an external test-bench and no errors were registered. All other components have been radiation tested, and failing component types have been replaced by equivalent components with better radiation tolerance.

All TileDMUs are tested on the wafer, and then more thoroughly after packaging. The boards are tested for continuity and against unintentional shorts. A special test-bench has been developed to produce realistic stimuli for the mounted digitizer boards, checking most of the functionality. Failing boards are diagnosed and repaired if the errors are simple. The delivered boards are then subjected to a one-week burn-in procedure at 70 °C with the boards powered up.

Subsequent system tests use an Atlas TileCal drawer as the test-bench. These tests verify full functionality in Atlas, making sure that the board fulfils all specifications; all previously encountered or envisaged malfunctions of the digitizer system are detected automatically. The tests reproduce the production operation of the digitizers in the Atlas environment. For future reference, the full test reports of the boards are saved in an auto-generated HTML file.

The calculated test time is 20 minutes per board, giving 120 boards per week. The burn-in oven has 120 digitizer slots and a cycle lasts one week; burn-in therefore adds only one week of extra latency and no further delay. Boards that do not pass the tests are put aside; simple bugs will be fixed later and, if needed, the other malfunctioning boards will also be repaired.
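The throughput figures above are consistent: at 20 minutes of test time per board, 120 boards amount to one 40-hour working week, which matches the 120-slot, one-week burn-in oven, so burn-in adds latency but does not limit throughput:

```python
# Arithmetic behind the production-rate figures above.
TEST_MIN_PER_BOARD = 20
BOARDS_PER_WEEK = 120

total_hours = TEST_MIN_PER_BOARD * BOARDS_PER_WEEK / 60
print(f"{total_hours:.0f} h of testing per week")  # 40 h
```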