Tuesday, May 31, 2011
The Future of NOR flash memory
NOR Flash continues to be the technology of choice for embedded applications requiring a discrete non-volatile memory device. The low read latencies characteristic of NOR devices allow for both direct code execution and data storage in a single memory product. NAND Flash devices have become a viable alternative in systems where read latency and data integrity are not critical device characteristics. This paper describes NOR’s current trajectories and operational characteristics that will enable usage in future applications.
Process shrinks allow for extended product life cycles
Over the past decade the pace of density growth has continued at a remarkable rate (Figure 1). Density growth will continue into the future. In 2011 we will see the introduction of a 4Gb NOR device and the next generation process node (45nm) will be released to production in 2012. Work has already begun on the follow-up node at 32nm to continue the process development roadmap.
FIGURE 1: Spansion’s NOR Flash Density Growth over the Last Ten Years
NOR manufacturers have historically been aggressive in applying new process technologies not only to the highest density devices but also to reduce the die sizes of lower density legacy products. Spansion’s 65nm process is not only used for the highest density 2Gb (and upcoming 4Gb) products but is also being used to shrink products between 64Mb and 1Gb. The smaller die sizes resulting from the continued application of new process nodes extends the length of time that a product remains commercially viable. While the 1Mb Async NOR product (AM29F010B) referenced in Figure 2 uses an older process node (320nm), it should be noted that the device is compatible with the much older AMD AM29F010 that was released to production in 1992 (using an even older process node). NOR’s extended product life cycle and breadth of available densities is unique in the memory industry.
FIGURE 2: Available Densities of Candidate Memories
Longer product life cycles are also due to reduced pressure for a typical NOR application to migrate to the highest density available. NOR serves a wide range of applications with a much broader distribution of density usage than NAND; when NOR applications do migrate, they tend to move to the next higher density rather than jumping to the leading-edge device. NAND applications, by contrast, move rapidly toward the highest available density, and this focus on leading-edge densities often comes at the expense of lower density products (built on older process nodes), which tend to become obsolete after relatively short life cycles.
Saturday, May 28, 2011
Time to exploit IDEs for hardware design and verification
Integrated Development Environments are solidly established in the software community. Eclipse, a major open source IDE, has been downloaded over 5 million times. Data published in 2008 by its rival, NetBeans, showed their IDE to be in use by over 2 million users worldwide. Commercial IDEs, such as Microsoft’s Visual Studio, are also popular.
The IDE’s advantages to software engineers are many, and most of them are relevant to engineers working on hardware verification. It is, after all, a software task. Languages used include C, C++ and SystemC, of course, but increasingly the dedicated verification languages e and SystemVerilog. Many aspects of hardware design are also language based, and can potentially benefit from the use of an IDE. Most digital designers use VHDL or Verilog, and Analog/Mixed-Signal (AMS) designers are increasingly taking up related languages such as VHDL-AMS and Verilog-AMS.
By taking a closer look at the differences between the general software and the more specialized hardware verification communities, this article will attempt to explain the current growth and future prospects of IDEs in hardware verification. The implications for hardware design are also examined.
The modern IDE
A modern IDE is much more than a glorified editor, both in terms of features and architecture. Furthermore, highly optimized graphical interfaces make initial use quite intuitive, allowing advanced functions to be learnt as the need arises (more on this below).
Figure 1 presents a conceptual snapshot of an IDE’s main functions, with emphasis on the flow of information to the user from multiple sources. In addition, an IDE acts as a central point of control over tools in the environment. A key component of the IDE is therefore the context sensitive user interface. This must display the information and controls that the user needs at any point in time, while suppressing distracting information.
As well as being convenient, the IDE’s ability to control the toolchain, including simulators, has the advantage that different tools may be plugged into the system without changing the way in which they are driven. It therefore decreases the burden of learning new tool interfaces and increases the tool/vendor neutrality of the flow.
When oriented towards code development, a platform such as Eclipse may include, for example, toolchain management (compile, run/simulate, debug, revision control), code profiling and linting, function and class browsing/navigation, refactoring support (obsolescing terabytes of grep-sed-awk scripts!) and, of course, syntax and semantic checks with auto-completion. Please see the sidebar for more information on typical IDE features.
The fundamental, differentiating benefit of such systems, when compared to file editors and other file processing tools, is that they are aware of and manage the relationships between many different aspects of the design. That is, they operate on a project basis, relating multiple information sources to the file being worked on and communicating this to the user. Once an engineer has experienced the advantages of such a system over simple file-by-file editing, there is no looking back!
Tuesday, May 24, 2011
Power aware verification of ARM-based designs
Power dissipation has become a key constraint for the design of today’s complex chips. Minimizing power dissipation is essential for battery-powered portable devices, as well as for reducing cooling requirements for non-portable systems. Such minimization requires active power management built into a device.
This article is based on class ATC-155 at the ARM Technology Conference.
In a System-on-Chip (SoC) design with active power management, various subsystems can be independently powered up or down, and/or powered at different voltage levels. It is important to verify that the SoC works correctly under active power management.
When a given subsystem is turned off, its state will be lost, unless some or all of the state is explicitly retained during power down. When that subsystem is powered up again, it must either be reset, or it must restore its previous state from the retained state, or some combination thereof. When a subsystem is powered down, it must not interfere with the normal operation of the rest of the SoC.
Power aware verification is essential to verify the operation of a design under active power management, including the power management architecture, state retention and restoration of subsystems when powered down, and the interaction of subsystems in various power states. In this presentation, we summarize the challenges of power aware verification and describe the use of IEEE 1801-2009 Unified Power Format (UPF) to define power management architecture. We outline the requirements and essential coverage goals for verifying a power-managed ARM-based SoC design.
Critical design constraints
The continual scaling of transistors and the end of voltage scaling have made power one of the critical constraints in the design flow. Trying to maintain performance levels and achieve faster speeds by scaling supply and threshold voltages increases the subthreshold leakage current, which depends exponentially on the threshold voltage.
Leakage currents lead to power dissipation even when the circuit is not doing any useful work, which limits operation time between charges for battery-operated devices, and creates a heat dissipation problem for all devices.
Minimizing power dissipation starts with minimizing the dynamic power dissipation associated with the clock tree, by turning off the clock for subsystems that are not in use. This technique has been in use for many years. But at 90nm and below, static leakage becomes the dominant form of power dissipation. Active power management minimizes static leakage through various techniques, such as shutting off the power to unused subsystems or varying the supply voltage or threshold voltage for a given component to achieve the functionality and performance required with minimum power.
Active power management
Active power management can be thought of as having three major aspects:
-the power management architecture, which involves the partitioning of the system into separately controlled power domains, and the logic required to power those domains, mediate their interactions, and control their behavior;
-the power managed behavior of the design, which involves the dynamic operation of power domains as they are powered up and down under active power management, as well as the dynamic interactions of those power domains to achieve system functionality;
-the power control logic that ultimately drives the control inputs to the power management architecture, which may be implemented in hardware or software or a combination thereof.
All three of these aspects need to be verified to ensure that the design will work properly under active power management. Ideally such verification should be done at the RTL stage. This enables verification of the active power management capability much more efficiently than would be possible at the gate level, which in turn allows more time for consideration of alternative power management architectures and simplifies debugging.
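To make the idea of coverage under active power management slightly more concrete, here is a minimal sketch of the kind of bookkeeping a power-state coverage model performs. It is illustrative only: the domain names and states are made up, and a real flow would collect this with a UPF-aware simulator and SystemVerilog covergroups rather than a Python script.

```python
# Conceptual sketch: track which combinations of power-domain states were
# exercised during a (hypothetical) power-aware simulation run.
from itertools import product

DOMAINS = {"cpu": ["ON", "RET", "OFF"], "gpu": ["ON", "OFF"]}   # made-up domains/states
observed = set()

def sample(state):
    """Record the current combination of domain states."""
    observed.add(tuple(sorted(state.items())))

# States sampled from an imaginary simulation trace.
sample({"cpu": "ON",  "gpu": "ON"})
sample({"cpu": "RET", "gpu": "OFF"})
sample({"cpu": "OFF", "gpu": "OFF"})

all_combos = {tuple(sorted(zip(DOMAINS, combo))) for combo in product(*DOMAINS.values())}
missed = all_combos - observed
print(f"covered {len(observed)}/{len(all_combos)} power-state combinations")
for combo in sorted(missed):
    print("not exercised:", dict(combo))
```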
Power management techniques
Several power management techniques are used to minimize power dissipation: clock gating, power gating, voltage scaling, and body biasing are four of them. Clock gating disables the clock of an unused device, to eliminate dynamic power consumption by the clock tree. Power gating uses a current switch to cut off a circuit from its power supply rails during standby mode, to eliminate static leakage when the circuit is not in use.
Voltage scaling changes the voltage and clock frequency to match the performance required for a given operation so as to minimize leakage. Body biasing changes the threshold voltage to reduce leakage current at the expense of slower switching times.
Power gating is one of the most common active power management techniques. Switching off the power to a subsystem when it is not in use eliminates the leakage current in that subsystem when it is powered down, and hence the overall leakage power dissipation through that subsystem is reduced. However, this technique also results in loss of state in the subsystem when it is switched off. Also, the outputs of a power domain can float to unpredictable values when they are powered down.
Another common technique is the use of different supply voltage levels for different subsystems. A subsystem that has a higher voltage supply can change state more quickly and therefore operate with higher performance, at the expense of higher static leakage and dynamic power.
A subsystem with a lower voltage supply cannot change state as quickly, and consequently operates with lower performance, but also with less static leakage and dynamic power. This technique allows designers to minimize static leakage in areas where higher performance is not required.
Multiple voltage supplies can also be used for a single subsystem, for example, by enabling it to dynamically switch between a higher voltage supply and a lower voltage supply.
This allows the system to select higher performance for that subsystem when necessary, but minimize static leakage when high performance is not required. Multi-voltage and power gating techniques can be combined to give a range of power/performance options.
All of these power management techniques must be implemented in a manner that preserves the intended functionality of the design. This requires creation of power management logic to ensure that the design operates correctly as the power supplies to its various components are switched on and off or switched between voltage levels. Since this power management logic could potentially affect the functionality of the design, it is important to verify the power management logic early in the design cycle, to avoid costly respins.
Power management specification
The power management architecture for a given design could be defined as part of the design, and ultimately it will be a part of the design’s implementation. A better approach, however, is to specify the power management architecture separate from the design. This simplifies exploration of alternative power management architectures, reduces the likelihood of unintended changes to the golden design functionality, and maintains the reusability of the design data.
This is the approach supported by IEEE 1801-2009, "Standard for Design and Verification of Low Power Integrated Circuits." This standard is also known as the Unified Power Format (UPF) version 2.0. Initially developed by Accellera, UPF is currently supported by multiple vendors and is in use worldwide.
UPF provides the concepts and notation required to define the power management architecture for a design. A UPF specification can be used to drive the implementation of power management for a given design, during synthesis or subsequent implementation steps.
A UPF specification can also be used to drive verification of power management, during RTL simulation, gate-level simulation, or even via static verification methods. The ability to use UPF in conjunction with RTL simulation enables early verification of the power management architecture. The ability to use UPF across all of these applications eases implementation and validation by enabling reuse of power management specifications throughout the flow.
UPF syntax is defined as an extension of Tcl, which enables UPF descriptions to leverage all of the control features of Tcl. UPF captures the power management architecture in a portable form for use in simulation, synthesis, and routing, reducing potential omissions during translation of that intent from tool to tool. Because it is separate from the HDL description and can be read by all of the tools in the flow, the UPF side file is as portable and interoperable as the logic design’s HDL code.
Monday, May 23, 2011
An introduction to precision analysis of high-speed serial systems and components
Advances in multi-gigabit data transfer have been dominated by serial data technologies like USB (Universal Serial Bus) and PCI Ex (Peripheral Component Interconnect Express), as well as technologies that have converted from parallel to serial, like SAS (Serial Attached SCSI).
Now, as we leap another order of magnitude in data rate from the 3rd generations of serial technologies like USB3 at 5 Gb/s and PCI Ex Gen 3 at 8 Gb/s to 40 and 100 Gb/s Ethernet (40 GbE and 100 GbE), parallel architectures are coming back. The most ambitious is 100 GbE’s four lanes at 25 Gb/s each.
In this paper we start with a quick, high level view of the emerging multi-Gb/s architectures and then delve into the tricks that make them work and how to analyze them.
High-speed serial systems are analyzed with at least one of three goals: diagnostics, compliance, or functional test. Compliance testing is an exhaustive checklist of performance benchmarks designed to assure the interoperability of components made by different vendors. Diagnostics, or hardware debug, involves providing well-understood conditions so that problems can be traced to their causes. Functional test, usually associated with manufacturing, employs a limited number of fast tests to determine if a product works.
Figure 1 shows the essential components of HSS (High Speed Serial) technology. The transmitter serializes a parallel data stream and transmits it through a channel that typically includes conducting cables and backplanes, and/or optic fibers. The most challenging technology is at the receiver because ones and zeros can’t be distinguished at these data rates with a simple slicer. Eye diagrams of signals at several Gb/s are more often than not closed. Tricks at the transmitter, like preemphasis and de-emphasis, and equalization at the receiver effectively reopen the eye so that symbols can be accurately decoded.
Figure 1: Principal components of a high-speed serial system.
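As a rough illustration of the transmit-side trick mentioned above, the snippet below applies a simple two-tap de-emphasis filter to an NRZ symbol stream. The tap weights are made up for illustration and are not taken from any particular standard.

```python
# Conceptual two-tap transmit de-emphasis: attenuate repeated symbols so that
# transitions carry relatively more energy (tap values are illustrative only).
bits = [0, 0, 1, 1, 1, 0, 1, 0]                  # NRZ data
symbols = [2 * b - 1 for b in bits]              # map 0/1 to -1/+1
c_main, c_post = 1.0, -0.25                      # main tap and post-cursor tap (assumed)

tx = [c_main * symbols[i] + c_post * (symbols[i - 1] if i > 0 else 0)
      for i in range(len(symbols))]
print(tx)   # transitions come out at magnitude 1.25, repeated symbols at 0.75
```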
Thursday, May 19, 2011
Design optimization of flip-chip packages integrating USB 3.0
As the speeds of various SerDes interfaces move into the multi-gigabits/sec range, more ASIC chips are being designed to have multiple high speed interfaces such as USB 3.0, PCIE Gen3, DDR3, and others. No longer is package design just a layout exercise or lumped model extraction.
Package design flow
It’s now more important to understand the interaction between the bumps, traces, vias, and solder balls in a flip chip package — or wirebonds, traces, vias, and solderballs in a wirebond package — to optimize the package layout and design before committing to high volume production. Today’s requirement is full 3D electromagnetic simulation (EM) and modeling to optimize the package design for crosstalk, reflection, and insertion loss. The package can no longer be designed “by itself” but has to be designed in conjunction with both the silicon chip and the system board, an approach commonly known as chip-package-board co-design. Let’s look at some important design considerations and an effective high-speed methodology successfully employed for the design of a package with a USB 3.0 interface using 3D EM modeling and simulation.
Flip chip package design example using IE3D for USB 3.0
USB 3.0 is a dual-bus architecture that incorporates USB 2.0 plus a SuperSpeed data bus. The SuperSpeed data bus employs differential signals and runs at 5 Gbits/sec. One of the initial design goals for the USB 3.0 differential traces is a reflection coefficient (S11) of −15 dB or less at 2.5 GHz and above, and an insertion loss (S21) of less than 0.5 dB.
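To put those dB targets in perspective, a quick back-of-the-envelope conversion (a Python sketch; the two targets above are the only inputs) shows how little energy they allow to be reflected or lost:

```python
# Convert the S-parameter targets from dB to linear quantities.
def db_to_mag(db):
    return 10 ** (db / 20.0)              # voltage (magnitude) ratio

s11 = db_to_mag(-15.0)                     # reflection target
s21 = db_to_mag(-0.5)                      # insertion-loss target

print(f"|S11| = {s11:.3f} -> about {100 * s11**2:.1f}% of incident power reflected")
print(f"|S21| = {s21:.3f} -> about {100 * s21**2:.1f}% of incident power delivered")
```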
Figure 1 shows a four-layer flip chip package to be used as a design example. This design was constructed using the package design software in Mentor Graphics IE3D flow. This example is a BGA package using build-up substrate technology. The vias encompass blind, buried, and through types. Also, via-in-pad technology is used for routing from the flip chip bumps to the inner layers on the BGA package.
Figure 1: Four-layer flip chip package stack up example design
Chip bump coordinates and netlists are generally provided in the form of a Microsoft Excel spreadsheet. The data is read into package design software. A die symbol and a package symbol are created. This is the first step in the package substrate layout. High speed and critical nets are routed first, from solder bump to solder ball.
The layout of the critical nets and high speed nets in the package design software is shown in Figure 2. These high speed nets are routed as differential nets and length matching between the pairs is done within 25 μm. These nets are routed on layer 1. Layer 2 is a ground plane layer, which provides the return path for all the signals, differential as well as single ended.
Figure 2: Top layer layout of the package showing the high-speed nets
Wednesday, May 18, 2011
Advances in integration for base station receivers
The increasing demand for data services on mobile phones puts continuous pressure on base station designs for more bandwidth and lower cost. Many factors influence the overall cost to install and operate additional base stations to serve the increased demand. Smaller, lower power electronics within a macrocell base station help to lower the initial costs as well as the ongoing cost of real estate rental and electrical power consumption for the tower. New architectures such as remote radio heads (RRH) promise to decrease costs even further. Tiny picocell and femtocell base stations extend the services to areas not covered by the larger macrocells. To realize these gains, base station designers need new components with very high levels of integration and yet they cannot compromise performance.
Integration in the RF portion of the radio is especially challenging because of the performance requirement. Over a decade ago, the typical base station architecture required several stages of low noise amplification, down-conversion to an intermediate frequency (IF), filtering and further amplification. Higher performance mixers and amplifiers, together with higher dynamic range analog-to-digital converters (ADCs) with higher sampling rates, have enabled designers to consolidate multiple down-conversion stages into a single IF stage today. However, component integration remains somewhat limited. Mixers are available with buffered IF outputs, integrated balun transformers, LO switches and dividers. A device with a mixer and a PLL for the LO represents a recent advance in integration. Dual mixers and dual amplifiers are available. As yet, no device is available that integrates any portion of the RF chain with the ADC on the same silicon. This is primarily because each component requires unique semiconductor processes. The performance trade-off associated with choosing a common process has been unacceptable for the application.
In parallel, the handset radio has evolved to highly integrated baseband and transceiver ICs and integrated RF front-end modules (FEM). RF functional blocks between the transceiver and antenna include filtering, amplification and switching (with impedance matching incorporated between components where needed). The transceiver integrates the receiver ADC, the transmit DAC and the associated RF blocks. Here the performance requirement is at a level such that a common process is viable. The FEM utilizes a system-in-package (SiP) technology to integrate various ICs and passives, including multi-mode filters and the RF switches for transmit and receive. Here, a common process was not viable but integration was still required.
The performance requirements for the RF/IF, ADC and DAC components in picocell and femtocell base stations tend to be much lower than for macrocell base stations because their range, power output and number of users per sector are lower than for macrocells. In some cases, modified versions of handset components can be used for picocell or femtocell base stations, providing the necessary integration, low power and low cost. Here, a common semiconductor process provides a sufficient level of performance for all of the functional blocks in the signal chain.
Saturday, May 14, 2011
Layout & bypass guidelines for high performance video amp/filter boards
Integrated circuit amplifiers are one of the most basic building blocks in any designer’s tool box and represent one of the most versatile products available.
An amplifier can provide a wide variety of functions, such as driving ADCs, driving multiple video loads, operating as a video or other type of filter, driving high-speed instrumentation signals, and much more. Amplifiers can also act as oscillators, but that can actually be a problem, since an amplifier should oscillate only when the designer wants it to.
However, an amplifier can have a mind of its own and oscillate at will if the board is designed incorrectly. So what should a designer do to protect against unwanted oscillation? The main thing is to recall the lessons of early electronics classes, which taught that oscillation is a function of capacitance, inductance, and feedback.
So the key is to make sure any extraneous capacitive and inductive feedback paths are reduced or eliminated by designing the board carefully. This is especially important for higher speed amplifiers (greater than 50MHz).
Invisible sources of capacitance and inductance can come from the board, the load (especially if the load is capacitive), and/or the layout. Furthermore, currents flowing to bypass capacitors on different places on the board can take different paths which can lead to distortion.
So, ironically, some techniques for reducing distortion are the opposite of what is recommended to guard against oscillation. (A designer’s job is never easy, is it?) So what are some of the layout considerations to keep in mind to keep everything in balance and fight distortion and oscillation when designing in an amplifier or video filter?
Looking first at oscillation, when directly driving a capacitive load with an amplifier, the load together with the amplifier's output impedance creates a phase lag that can cause peaking or oscillation.
Some amplifiers have the ability to drive capacitive loads, but for those that cannot, a small series resistance (Rs) placed at the amplifier’s output can enhance stability and settling time (Figure 1 below).
Figure 1: A small series resistance (Rs) placed at the amplifier’s output can enhance stability and settling time.
When driving a transmission cable with an inherent impedance (such as a coax cable) as shown in Figure 2 below, resistors Rs and RL should be set equal to the cable’s impedance (Zo), and the capacitor C should be set to match the cable impedance over a wide frequency range to compensate for the amplifier's increasing output impedance (with increasing frequency).
Figure 2: Driving cable or transmission line.
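A quick numeric sketch of the back-terminated line in Figure 2, using an assumed 75-ohm video cable (the resistor values simply follow the rule above):

```python
# Back-terminated transmission line: Rs = RL = Zo halves the signal at the load,
# which is why the driving amplifier is commonly configured for a gain of 2.
Zo = 75.0                 # characteristic impedance of the cable (assumed, ohms)
Rs = Zo                   # series back-termination at the amplifier output
RL = Zo                   # termination at the far end of the cable

load_fraction = RL / (Rs + RL)
print(f"fraction of amplifier output reaching the load: {load_fraction:.2f}")
print(f"amplifier gain needed for unity overall gain: {1 / load_fraction:.1f}")
```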
Thursday, May 12, 2011
Designing reliable capacitive-touch interfaces
Capacitive sensing offers an intuitive and robust interface that increases product reliability by eliminating mechanical parts in many appliances (also called "white goods") and instrumentation. Because of their experience with personal electronic devices, many consumers are used to touch interfaces based on capacitive sensing, and they have come to expect these interfaces to be reliable and operate accurately.
Capacitive technology, however, is affected by environmental noise and other factors which can cause systems to not respond to finger touches or to trigger false touches. Unless developers tune sensors, accuracy and reliability can be severely reduced. By understanding how capacitive sensors work and how they can be designed to self-tune themselves to compensate for noise, developers can build robust systems that make their appliances more reliable, cost-effective, and easier to use.
Capacitive Sensing
To understand the challenges behind designing a robust user interface, it helps to first take a brief look at the technology behind a capacitive measurement system. Figure 1 shows a cross-sectional view of a capacitive sensor board.
Figure 1: Cross-sectional view of a capacitive-sensing board
To sense the presence of a finger, a capacitive sensing system must first know the sensor capacitance in the absence of a finger (see Figure 2a), also known as the parasitic capacitance (Cp). When a finger approaches or touches the sensor (see Figure 2b), the sensor capacitance will change, resulting in another capacitance called the finger capacitance (Cf) in parallel to the Cp. In the presence of a finger, the total sensor capacitance (Cx) is given by Equation 1:
Cx = Cp + Cf – Equation 1
Figure 2(a): Sensor capacitance in the absence of finger
Figure 2(b): Sensor capacitance in the presence of finger
To be able to analyze the sensor capacitance using a microcontroller, the sensor capacitance (Cx) needs to be converted into a digital value. Figure 3 shows the block diagram of one such capacitive-sensing pre-processing circuit. (Note: There are several methods for measuring sensor capacitance.)
Figure 3: Pre-processing circuit for capacitance measurement
This system uses a switched capacitor block that emulates the sensor capacitance Cx using a resistance Req, a programmable current source (Idac), an external capacitor (Cmod), and a precision analog comparator. The Idac charges Cmod continuously until the voltage on Cmod crosses Vref and the comparator output is high. The Idac is then disconnected and Cmod discharges through Req until the voltage on Cmod drops below Vref. The comparator output is now low until Cmod charges to Vref again. Cx will be greater in the presence of a finger and the emulated Req will be less according to Equation 2:
Req = 1/(Fs × Cx) – Equation 2
where Fs is the switching frequency of the switched capacitor block.
Thus, when a finger is present, Cmod discharges faster and the comparator output stays high for a shorter time. This means that a higher capacitance value corresponds to a shorter high time for the comparator. The resulting bit stream at the comparator output (see Figure 3) can be fed to a counter for a fixed amount of time. This counter value or “raw counts” provides an indication of the magnitude of Cx.
The fixed amount of time for which the counter counts also determines the number of raw counts and can be referred to as the resolution. When the resolution is increased, the counter counts for a longer period of time and this increases the raw counts. Put another way, resolution is also the highest number of raw counts possible.
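To get a feel for the magnitudes in Equation 2, here is a small numeric sketch; the switching frequency and capacitance values are assumed for illustration and are not taken from any specific controller.

```python
# Emulated bleed resistance from Equation 2: Req = 1 / (Fs * Cx)
Fs = 300e3                    # switched-capacitor frequency, Hz (assumed)
Cp = 10e-12                   # parasitic sensor capacitance, F (assumed)
Cf = 1e-12                    # additional finger capacitance, F (assumed)

req_idle  = 1.0 / (Fs * Cp)           # no finger
req_touch = 1.0 / (Fs * (Cp + Cf))    # finger present (Cx = Cp + Cf)

print(f"Req with no finger: {req_idle / 1e3:.0f} kOhm")
print(f"Req with a finger:  {req_touch / 1e3:.0f} kOhm")
# A lower Req discharges Cmod faster, shortening the comparator high time,
# which the counter then turns into a shift in raw counts.
```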
Tuning
Figure 4 shows the design flow for a capacitive sensor touch interface. However, capacitive sensors must operate in the real world where variations in components, environmental operating conditions, and noise can impact sensor performance and reliability.
Tuning is a critical process for ensuring that a sensor functions correctly and consistently. This is achieved by identifying and determining optimum values for a set of sensor parameters to maintain a sufficient signal-to-noise ratio (SNR) and finger threshold. In general, a 5:1 SNR is the minimum requirement for a robust sensor design (see Figure 5). To avoid false triggering caused by capacitance shifts due to environmental changes, a finger threshold between 65% and 80% of the signal strength is recommended to ensure reliable finger detection.
Figure 5: Raw sensor data is comprised of finger response and noise. Finger response, also called signal strength, is the difference in raw counts seen by the sensing system when a finger is placed on the sensor.
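As a worked example of the 5:1 SNR rule and the 65-80% threshold recommendation, the sketch below uses hypothetical raw-count numbers (all values are assumed, not measured):

```python
# Hypothetical scan data for one sensor.
baseline = 8200       # raw counts with no finger (assumed)
touched  = 8360       # raw counts with a finger on the sensor (assumed)
noise_pp = 20         # peak-to-peak noise in raw counts (assumed)

signal = touched - baseline            # finger response / signal strength
snr = signal / noise_pp
threshold = baseline + 0.75 * signal   # finger threshold at 75% of signal strength

print(f"signal strength = {signal} counts, SNR = {snr:.1f}:1 (want at least 5:1)")
print(f"finger threshold set at {threshold:.0f} raw counts")
```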
While sensor controller manufacturers provide guidelines to aid engineers in the tuning process, achieving the ideal tuning parameters for the system is an iterative process. For a sensor controller with a capacitive sensing algorithm similar to the one shown in Figure 3, the tuning procedure will follow the steps shown in Figure 6.
Developers can implement tuning parameters either by writing code specific to the operation of the sensors in firmware, through external components, or by configuring the controller. With a firmware approach, developers have flexibility; however, whenever tuning parameters need to be changed, the firmware also needs to be modified and updated.
Alternatively, designers can simplify system firmware development by utilizing a fixed-function/non-programmable capacitive sensor controller. Tuning parameters, in this case, must either be implemented using external components on the board or by sending configuration data over a communication interface such as I2C.
With this approach, whenever tuning parameters need to be changed, either the user interface board needs to be reworked or the configuration data needs to be updated. Developers need to be aware that tuning can be time-consuming, especially if the PCB or overlay needs to be changed between iterations.
Information is shared by www.ideasroad.com
Capacitive technology, however, is affected by environmental noise and other factors which can cause systems to not respond to finger touches or to trigger false touches. Unless developers tune sensors, accuracy and reliability can be severely reduced. By understanding how capacitive sensors work and how they can be designed to self-tune themselves to compensate for noise, developers can build robust systems that make their appliances more reliable, cost-effective, and easier to use.
Capacitive Sensing
To understand the challenges behind designing a robust user interface, it helps to first take a brief look at the technology behind a capacitive measurement system. Figure 1 shows a cross-sectional view of a capacitive sensor board.
Figure 1: Cross-sectional view of a capacitive-sensing board
To sense the presence of a finger, a capacitive sensing system must first know the sensor capacitance in the absence of a finger (see Figure 2a), also known as the parasitic capacitance (Cp). When a finger approaches or touches the sensor (see Figure 2b), the sensor capacitance will change, resulting in another capacitance called the finger capacitance (Cf) in parallel to the Cp. In the presence of a finger, the total sensor capacitance (Cx) is given by Equation 1:
Cx = Cp +_ Cf – Equation 1
Figure 2(a): Sensor capacitance in the absence of finger
Figure 2(b): Sensor capacitance in the presence of finger
To be able to analyze the sensor capacitance using a microcontroller, the sensor capacitance (Cx) needs to be converted into a digital value. Figure 3 shows the block diagram of one of the capacitive sensing preprocessing circuit. (Note: There are several methods for measuring sensor capacitance.)
Figure 3: Pre-processing circuit for capacitance measurement
This system uses a switched capacitor block that emulates the sensor capacitance Cx using a resistance Req, a programmable current source (Idac), an external capacitor (Cmod), and a precision analog comparator. The Idac charges Cmod continuously until the voltage on Cmod crosses Vref and the comparator output is high. The Idac is then disconnected and Cmod discharges through Req until the voltage on Cmod drops below Vref. The comparator output is now low until Cmod charges to Vref again. Cx will be greater in the presence of a finger and the emulated Req will be less according to Equation 2:
Req = 1/FsCx – Equation 2
where Fs is the switching frequency of the switched capacitor block.
Thus, when a finger is present, Cmod discharges faster and the comparator output stays high for a shorter time. This means that a higher capacitance value corresponds to a shorter high time for the comparator. The resulting bit stream as shown in Figure 1 can be fed to a counter for a fixed amount of time. This counter value or “raw counts” provides an indication of the magnitude of Cx.
The fixed amount of time for which the counter counts also determines the number of raw counts and can be referred to as the resolution. When the resolution is increased, the counter counts for a longer period of time and this increases the raw counts. Put another way, resolution is also the highest number of raw counts possible.
Tuning
Figure 4 shows the design flow for a capacitive sensor touch interface. However, capacitive sensors must operate in the real world where variations in components, environmental operating conditions, and noise can impact sensor performance and reliability.
Tuning is a critical process for ensuring that a sensor functions correctly and consistently. This is achieved by identifying and determining optimum values for a set of sensor parameters to maintain a sufficient signal-to-noise ratio (SNR) and finger threshold. In general, a 5:1 SNR is the minimum requirement for a robust sensor design (see Figure 5). To avoid false triggering caused by changes in capacitive due to atmospheric changes, a finger threshold of between 65-80% of the signal strength is recommended to ensure reliable finger detection.
Figure 5: Raw sensor data is comprised of finger response and noise. Finger response, also called signal strength, is the difference in raw counts seen by the sensing system when a finger is placed on the sensor.
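To make the 5:1 guideline concrete, here is a minimal sketch; the baseline, touched and noise counts are invented example values, not measurements from any real sensor:

# Check the 5:1 SNR guideline and derive a 75% finger threshold.
baseline_counts = 5800   # assumed raw counts with no finger
touched_counts = 6200    # assumed raw counts with a finger on the sensor
noise_counts = 40        # assumed peak-to-peak noise in raw counts

signal = touched_counts - baseline_counts           # finger response (signal strength)
snr = signal / noise_counts

finger_threshold = baseline_counts + 0.75 * signal  # 75% of signal strength

print("Signal strength: %d counts, SNR = %.1f:1" % (signal, snr))
print("SNR %s the 5:1 guideline" % ("meets" if snr >= 5 else "fails"))
print("Finger threshold: %.0f counts" % finger_threshold)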
While sensor controller manufacturers provide guidelines to aid engineers in the tuning process, achieving the ideal tuning parameters for a system is an iterative process. For a sensor controller with a capacitive sensing algorithm similar to the one shown in Figure 3, the tuning procedure follows the steps shown in Figure 6.
Developers can implement tuning parameters either by writing code specific to the operation of the sensors in firmware, through external components, or by configuring the controller. With a firmware approach, developers have flexibility; however, whenever tuning parameters need to be changed, the firmware also needs to be modified and updated.
Alternatively, designers can simplify system firmware development by utilizing a fixed-function/non-programmable capacitive sensor controller. Tuning parameters, in this case, must either be implemented using external components on the board or by sending configuration data over a communication interface such as I2C.
With this approach, whenever tuning parameters need to be changed, either the user-interface board needs to be reworked or the configuration data needs to be updated. Developers should be aware that tuning can be time-consuming, especially if the PCB or overlay must change between iterations.
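For the configuration-over-I2C route, a minimal host-side sketch might look like the following. The device address and register map are hypothetical (a real controller's datasheet defines its own), and the smbus2 Python package is assumed to be available:

from smbus2 import SMBus

SENSOR_ADDR = 0x37        # hypothetical 7-bit I2C address of the sensor controller
REG_RESOLUTION = 0x10     # hypothetical register: scan resolution
REG_FINGER_THRESH = 0x11  # hypothetical register: finger threshold (% of signal)

def write_tuning(bus_id=1, resolution=12, finger_threshold_pct=75):
    # Push updated tuning parameters to the fixed-function controller over I2C.
    with SMBus(bus_id) as bus:
        bus.write_byte_data(SENSOR_ADDR, REG_RESOLUTION, resolution)
        bus.write_byte_data(SENSOR_ADDR, REG_FINGER_THRESH, finger_threshold_pct)

if __name__ == "__main__":
    write_tuning()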
Information is shared by www.ideasroad.com
Wednesday, May 11, 2011
Measure the input capacitance of your op amp
Op amps with low input capacitance are required in applications such as smoke detectors, photodiode transimpedance amplifiers, medical instrumentation, industrial control systems, and the piezo-sensor interface. CMOS-input op amps, for instance, require minimal input capacitance when amplifying capacitive-sensor outputs or the small signals from high-impedance sources.
Input capacitance also affects a pole in the feedback path that can cause instability in high-gain, high-frequency applications. By minimizing this input capacitance, you may be able to increase the corresponding pole frequency until it has a negligible effect on the circuit.
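As a rough first-order illustration (assuming a transimpedance or inverting stage with feedback resistance Rf, which is not specified here), the feedback-network pole associated with the total capacitance at the input falls near f_pole ≈ 1/(2π × Rf × (Cin + Cstray)); halving the input capacitance therefore roughly doubles the pole frequency, moving it away from the region where it erodes phase margin.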
Measuring the input capacitance of an op amp isn’t trivial, however; especially if the value is only a few picofarads. Such low values also present difficulties in screening the op amps during production testing. Hence, semiconductor companies often provide only typical values for this parameter, using simulation results and bench measurements on a few known good units. The following discussion can provide a sanity check in the lab by assisting the system-level designer or QA engineer to accurately determine the input capacitance for any op amp.
The direct approach of observing input capacitance on a multimeter isn’t practical below a few nanofarads. A simple yet effective alternative is to insert a large resistor in series with the op-amp input (Figure 1).
Figure 1: A resistor in series with an op amp input enables measurement of the op amp’s input capacitance.
Plotting the frequency response of the resulting first-order lowpass RC filter on a network analyzer (i.e., a Bode plot) lets you calculate the op amp’s input capacitance. Sounds simple, but you must follow precautions to ensure that the measurement accuracy isn’t compromised by stray capacitance in the PC board (PCB) and the test setup.
Follow these tips to minimize stray parasitics:
* Increase the measurement resolution by using only low-capacitance FET probes (<1pF), such as the Tektronix P6245.
* If the series resistor is a surface-mount component, ensure that the board capacitance to ground is as low as possible. (This implies no ground-plane layer beneath the input signal traces and the series resistor.)
* If the series resistor is a through-hole component, bend the input pin so it does not contact the PCB, and use a short lead length to solder the resistor directly to the op amp input pin.
* Do not use a breadboard in the test setup, because capacitance between the breadboard tracks and jumper wires can degrade the measurement accuracy.
* Use short traces at the input to minimize series inductance.
The hardware recommended for this test setup (Figure 2) includes an Agilent 4395A network analyzer, a Mini-Circuits ZFRSC-2050 power splitter, and a Tektronix P6245 active FET probe.
Figure 2: Test setup for measuring op-amp input capacitance.
First, calibrate the setup with no op amp installed on the PCB. From the resulting Bode plot, you can calculate stray capacitance as Equation 1:
where f1(-3dB) is the corner frequency as measured on the network analyzer with no op amp installed, and RTH1 is the Thevenin-equivalent series resistance. RTH1 is a function of the inserted series resistor, the input termination resistance (50Ω), and the source impedance of the power splitter (50Ω), Equation 2:
Next, install the op amp on the PCB. Since the board’s stray capacitance is in parallel with the op amp’s input capacitance, Equation 1 becomes Equation 3:
where f2(-3dB) is the corner frequency as measured on the network analyzer with the op amp installed, and RTH2 is the Thevenin-equivalent series resistance.
This Thevenin equivalent resistance is a function of the inserted series resistor, the input termination resistance (50Ω), output impedance of the power splitter (50Ω), and the common mode input impedance of the op amp, Equation 4:
The common-mode input impedance of an op amp is not accurately known. For a CMOS-input op amp, however, it is fairly easy to select RSERIES << RCM. Then RTH2 ≈ RTH1, and Equation 3 can be rewritten as Equation 5:
You can now calculate the op amp’s input capacitance from Equations 1 and 5, and verify the value by repeating the experiment with two different values of series resistor.
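The equation images referenced above are not reproduced in this text. As a hedged reconstruction from the standard first-order corner-frequency relation f(-3dB) = 1/(2πRC) and the definitions given, the calculation can be sketched as follows; the corner frequencies below are placeholders chosen only so the result lands near the ≈4 pF figure quoted later:

import math

R_SERIES = 200e3   # series resistor (ohms), as used in Figures 3 and 4
R_SOURCE = 50.0    # power-splitter source impedance (ohms)
R_TERM = 50.0      # input termination resistance (ohms)

# Equation 2 (reconstructed): Thevenin resistance with no op amp installed.
R_TH1 = R_SERIES + (R_SOURCE * R_TERM) / (R_SOURCE + R_TERM)

f1 = 265e3   # placeholder f1(-3dB), no op amp installed (Hz)
f2 = 114e3   # placeholder f2(-3dB), op amp installed (Hz)

# Equation 1 (reconstructed): stray board capacitance from the first corner.
c_stray = 1.0 / (2 * math.pi * R_TH1 * f1)

# Equation 5 (reconstructed, RTH2 ~ RTH1 when RSERIES << RCM): total capacitance.
c_total = 1.0 / (2 * math.pi * R_TH1 * f2)

c_in = c_total - c_stray   # op-amp input capacitance
print("Cstray = %.2f pF, Cin = %.2f pF" % (c_stray * 1e12, c_in * 1e12))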
To illustrate the method, consider an input-capacitance measurement for the MAX4238 op amp.
Figure 3 shows the amplitude response from Figure 2 using a 200 kΩ series resistor and no op amp installed on the PCB, and Figure 4 shows the amplitude response with the MAX4238 installed.
Figure 3: Amplitude response from Figure 2, with RSERIES = 200 kΩ and no op amp installed on the PCB. The f1(-3dB) frequency is indicated by the downward-pointing arrow.
Figure 4: Amplitude Response from Figure 2, with RSERIES = 200 kΩ and the MAX4238 op amp installed. The f2(-3dB) frequency is indicated by the downward-pointing arrow.
Table 1 summarizes the results, using the frequency-response waveforms and the calculations from Equations 1 and 5. As a sanity check, the measurement was repeated with a different series resistor value to demonstrate that a similar result (≈4pF) is obtained.
Table 1: Summary of MAX4238 input-capacitance measurements
Monday, May 9, 2011
Testing times for LTE – can it co-exist with 2 and 3G systems?
The global telecommunications market is witnessing a paradigm shift in demand as mobile data revenues continue to surpass voice-based revenues in most western countries, accelerating the transition to technologies such as LTE. Mobile networks are witnessing fast-paced development as operators go the extra mile to cater to the changing communication and entertainment needs of their subscribers. New LTE networks will utilize 3G technologies as the underlying infrastructure where no LTE service is yet provided, so testing handovers between different radio access technologies is becoming ever more important. The consequence of inadequate response times can be slow (or no) handover, and poor user experience such as dropped connections.
The article expands on this industry background, including the introduction of voice over LTE, and defines and describes the various scenarios for Inter-RAT (Radio Access Technology) handover, including the proposed alternative fall-back scenarios for voice.
What’s the need for LTE anyway?
The global telecommunications market is witnessing a paradigm shift in demand as mobile data revenues surpass voice-based revenues in most western countries. Wireless network operators are focusing on expanding broadband-service revenues, as users increasingly judge wireless access against the experience of wired broadband. The rapid growth of mobile broadband is driven by demand for the latest devices, applications and services, which enable users to access any type of content on the move. Mobile broadband will also bring economic benefits, especially in countries lacking fixed-line broadband infrastructure. All of this is driving up data usage on mobile networks at a tremendous rate, and operators need to respond with bandwidth availability, which in turn provides the driving force behind the development of evolved 3G and 4G systems such as HSPA and LTE.
Long Term Evolution (LTE) is the project name given by 3GPP to the evolution of the UMTS 3G radio standards. The work on enhancing the original UMTS Terrestrial Radio Access (UTRA) continues in Release 8 of the 3GPP standards with enhancements to High Speed Packet Access (HSPA), but in addition Release 8 includes LTE, or to give it its formal name, Evolved UMTS Terrestrial Radio Access (E-UTRA). Offering higher data rates and lower latency for the user, a simplified all-IP network for the operator and improved spectral efficiency, E-UTRA – or LTE – promises to provide many benefits.
LTE as part of the cellular infrastructure
LTE is an all-IP system, designed primarily to provide high-speed data services. Therefore, during network build-out, and until operators choose to implement IP-based voice services, LTE networks will utilize 2G and 3G as an underlying infrastructure for voice calls, and for data services where no LTE service is yet provided. In normal operation, the mobile device (user equipment, or UE) is required to scan for neighbor cells and make measurements which are used as a basis for cell selection and handover decisions. Such processes are very demanding for today’s UEs, which must also multi-task a large number of other applications, making heavy demands on processor power. The consequence of inadequate UE response times can be slow (or no) handover, and poor user experience such as dropped connections and frozen applications.
Industry research predicts that LTE is likely to experience its most rapid growth from 2012, when the majority of operators launch their networks and a unified approach to delivering voice communications and rich services such as video telephony over LTE becomes available.
Because LTE coverage will not be pervasive, testing handover capability between different radio access technologies (RAT) is critically important in the verification of UE performance. For a positive end-user experience, UEs need to transition smoothly between these RATs, leading operators to increase their focus on testing the real-world performance of each device before deployment on their networks. Such performance testing goes well beyond the more traditional conformance tests defined by the industry’s standards bodies.
Consider two aspects of testing through the lifecycle:
* Conformance – necessary but not sufficient for deployment
* Performance – reflects real use cases (e.g., measuring maximum data throughput and battery drain under different conditions)
Conformance test might be taken as an industry requirement – ensuring the UE supports a level of functionality and does not cause a problem on the system or to other users – whereas performance test gives the UE manufacturer the opportunity to differentiate their device based on better user experience: application speed, battery life and generally how the UE fulfills expectations. Inter-RAT handovers are part of both, and assume different importance depending on what the UE is currently doing. If it is idle (not using network resources), conformance issues are the main concern. If, however, the user has a data-hungry application active, performance issues become much more important. In idle mode, network selection decisions are made mainly by the UE and transmitted to the network. Where the UE has an active data connection, the network decides the transmission channel, based on its own measurements and neighbor cell measurement data returned from the UE. See Figure 1.
A second criterion for Inter-RAT handover is the need for a voice service. As previously mentioned, LTE is a packet-only service, with no provision for the circuit-switched voice connection that is normal in earlier systems. Until operators make the additional network equipment investment required to support voice in LTE, making or receiving a voice call will not be part of an LTE service. Meantime, many operators are investing in LTE alongside existing voice networks which offer more extensive coverage. In this scenario it makes sense to use the LTE connection for data and the existing network for voice.
Information is shared by www.irvs.info
Saturday, May 7, 2011
Battery-less, RF-powered energy harvesting wireless sensor system targets building and industrial automation
Powercast Corporation has unveiled its Lifetime Power Wireless Sensor System for wireless environmental monitoring in HVAC control and building automation. Remote radio frequency (RF) transmitters broadcast RF energy that perpetually powers wireless sensor nodes without batteries or wires.
The Powercast wireless sensor system is composed of three parts: a family of battery-less sensor nodes embedded with Powercast's Powerharvester receivers so they may be powered wirelessly, a WSG-101 Building Automation System (BAS) Gateway, and Powercast's TX91501 Powercaster transmitter.
The first of the family of sensor devices being made available is the WSN-1001 Wireless Temperature and Humidity Sensor. Additional wirelessly-powered sensors to measure CO2, pressure, light level, motion and other conditions will follow shortly.
In the wireless powering system, a single transmitter or a network of them power multiple sensor devices. Powerharvester receivers embedded inside the sensor nodes receive RF energy up to 60 to 80 feet away from the Powercaster transmitters broadcasting radio waves at 915 MHz (a frequency commonly used in industrial and consumer devices). The receivers then convert the RF energy into DC current to power the sensors wirelessly, similar to RFID, but with longer range and greater functionality.
Broadcast RF energy can reach and power sensors even through walls, above ceilings, and behind objects, and provides a reliable and predictable energy source as opposed to purely ambient energy-harvesting technologies such as indoor solar, thermal, or vibration.
The WSG-101 BAS Gateway can scale up to 100 sensor nodes and 800 sensor points for large-scale deployment. The gateway interfaces to wired BAS networks via industry-standard protocols, including BACnet, Modbus, Metasys, and LonWorks. The sensors and gateway communicate wirelessly at 2.4 GHz using 802.15.4 radios which users can set to channels that will not interfere with, nor be interfered by, Wi-Fi networks.
Powercast estimates that a battery-less, wirelessly-powered sensor system could save users 40 to 50 percent over the installed cost of wired sensors and controllers for building automation systems, and can also eliminate the future and repeated maintenance cost of battery replacement and disposal.
Powercast developed its wireless sensor system using the company's core RF transmitter (Powercaster) and receiver (Powerharvester) energy-harvesting technology, and the other devices included in Powercast's P2110-EVAL-01 Energy Harvesting Development Kit for Wireless Sensors. The receiver embedded in the sensor nodes is based on Powercast's award-winning P2110 Powerharvester receiver, which is available for OEMs to design into their own products.
Information is shared by www.irvs.info
Thursday, May 5, 2011
Functional safety poses challenges for semiconductor design
To manage systematic and random failures, vendors have applied functional safety techniques at the system level for decades. As the capability to integrate multiple system-level functions into a single component has increased, there’s been a desire to apply those same practices at the semiconductor component or even subcomponent level.
Although the state of the art in functional safety is not yet well aligned with the state of the art in semiconductors, recent work on the IEC 61508 second edition and the ISO 26262 draft standards has brought improvements. Many challenges remain, however.
Texas Instruments and Yogitech, a company that designs and verifies mixed-signal system-on-chip solutions, are working together to address these challenges, both in standards committees and on new TMS570 microcontroller designs. (See Figure 1 below for an example of current-generation designs.)
Standards and analysis
All elements that interact to realize a safety function or contribute to the achievement of a semiconductor safety goal must be considered. Regrettably, the available standards aren’t consistent in application or scope. For example, IEC 61508 makes a general distinction between system design and software design, while ISO 26262 respects separate system, hardware component and software component developments.
So how should we consider reusable subcomponent modules such as an analog/digital converter or processor core? “Hard” modules, such as A/D converters, have a fixed implementation and can easily be developed according to hardware component guidelines. “Soft” modules, such as processor cores, are delivered as source code and have no physical implementation until synthesized into a design.
Because a soft module blurs the line between hardware and software components, some levels of quantitative safety analysis cannot be performed until it is synthesized. Trial synthesis with well-documented targets is therefore recommended to allow calculation of reference safety metrics, so that potential users can evaluate a module's suitability for their design.
To ensure functional safety, it is critical to understand the probability of random failure of the elements that constitute a safety function or that achieve a safety goal. In traditional analysis, each component in the safety function is typically treated as a black box, with a monolithic failure rate.
Traditional failure rates are estimated based on reliability models, handbooks, field data and experimental data. Those methods often generate wildly different estimates; deltas of 10x to 1,000x are not uncommon. Such variation can pose significant system integration hurdles.
How can you perform meaningful quantitative safety analysis without component failure rates estimated to the same assumptions? One solution is to standardize estimation of failure rates based on a single generic model, such as the one presented in IEC TR 62380 (currently considered in the ISO 26262 draft guidelines). Another is to focus on ratiometric safety analyses—calculations that compare ratios of detected faults to total faults, instead of focusing on the absolute magnitude of failure rate.
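As a simple hedged illustration of the ratiometric idea (the fault counts below are invented, not taken from any TI or Yogitech analysis), a metric such as diagnostic coverage compares detected faults with total faults rather than relying on absolute failure rates:

# Ratiometric view: coverage is a ratio, so it is far less sensitive to which
# reliability handbook or model produced the underlying failure-rate estimates.
faults_total_dangerous = 1250      # assumed total dangerous faults from an FMEDA
faults_detected_dangerous = 1180   # assumed faults caught by safety mechanisms

diagnostic_coverage = faults_detected_dangerous / faults_total_dangerous
print("Diagnostic coverage: %.1f%%" % (100.0 * diagnostic_coverage))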
This information is shared by www.irvs.info
Monday, May 2, 2011
Quantenna debuts 802.11n 4x4 MIMO wireless video bridge ref design
The QHS600 was announced in October 2008 (see the EE Times story) and is a single-chip, 5-GHz solution capable of 600-Mbit/s connections that integrates baseband, media access control (MAC) and four RF transceivers, along with their respective power amplifiers, low-noise amplifiers and Tx/Rx switches.
However, according to David French, CEO of Quantenna, "Of more value is the digital beamforming hardware with on-chip DSP doing real-time characterization of the Wi-Fi channel to perform [signal] steering on a packet-by-packet basis." The DSP is an ARC 4 and in all there are 14 patents around this area, he added. Other features include concurrent dual-band mode and mesh networking. According to French, it should be under $10 by the end of 2010.
The newly announced reference design kit (RDK), dubbed the QHS600x, consists of a radio GMII module connected via a mPCI connector to a host adapter board and enables straightforward board boot, bring up and program execution. It also includes integrated functions for various I/O interfaces, including gigabit Ethernet, DDR SDRAM, flash, USB 2.0 OTG, ARM JTAG debug port, GPIOs, serial port and four antenna ports.
Also included is a complete software developer's kit (SDK) that implements the entire networking and device discovery/connectivity functionality required for a wireless video bridge module supporting the 802.11n standard. Additionally, Quantenna and LitePoint have worked together to provide a full test suite of specialized video-over-wireless radio frequency (RF) calibration and performance-characterization software.
According to French, Quantenna will introduce a video bridge at CES, and he sees a strong play for its technology with service providers. The company already has a deal with Swisscom, one of its backers, as well as with eight other providers, as yet unnamed.
Information is shared by www.irvs.info