IRVS VLSI IDEA INNOVATORS
VLSI Project, Embedded Project, Matlab Projects and courses with 100% Placements

Monday, June 27, 2011

Crash Course on VLSI, Embedded & Matlab



Visit us: http://www.irvs.info

Friday, June 24, 2011

Basics of SoC I/O design: The building blocks

The integration of analog with digital and the increasing number of on-chip features in mixed-signal controllers demand more complex I/O structures. Though they are sometimes among the most neglected features of a chip, I/O (input/output) pins can represent a great deal of functionality in an SoC (System on Chip).

The I/O structures in today’s SoCs are so feature-rich that fully understanding their capabilities is important to doing more effective system design and achieving greater value from the SoC.

In this two part article, we will discuss the following:

* the basic structure of an I/O block in any digital device

* the pin specifications that need to be understood when selecting a device for an application

* the I/O block configuration variants used for different application requirements

* how to choose the particular configuration that achieves both reduced BOM cost and improved system performance

Drive modes

Drive mode is the way the pin is driven based on its output/input state. In this section we will look at some of the drive modes generally used in a generic System on Chip. When it comes to drive modes, the discussion is mainly about digital pins, since high impedance is, apart from some exceptions, the only drive mode used for analog. These drive modes may be named differently by different SoC manufacturers but can be recognized easily by looking at the I/O architecture. Used appropriately, they help yield better system integration and reduce BOM cost. Let us look at the very basic output stage of an I/O cell.

Basic digital output cell: Figure 1 below shows the output driver available in most controllers. This drive mode may be known as the strong or CMOS drive mode, depending on the controller.

If we look at it closely, it is nothing but an inverter whose input is controlled by a register bit, generally called the data register. (The reason it is called strong is that the CMOS inverter drives both ‘1’ and ‘0’ at strong levels.)



Figure 1: CMOS drive mode (CMOS inverter)

All other drive modes are slight variations of this CMOS inverter, used to achieve different system topologies. Let us look into these variations.

Resistive pull-up/pull-down: This drive mode helps reduce BOM cost in many applications, so we discuss it first. In resistive pull-up/pull-down mode, a resistance is introduced between the drains of the MOS transistors and the pin pad (Figure 2 below).



Figure 2: Resistive Pull up/ Pull down drive mode

It limits the current flowing through the pin and serves the same purpose as an external pull-up/pull-down resistance. In applications where a switch needs to be interfaced, a pull-up/pull-down resistance is needed to keep the input at a defined logic level.

This pulling up/down of the pin can provide a stable default state and thus avoid random fluctuation that could occur due to noise. Now, the resistance internal to GPIO cell can be used for this purpose in a resistive pull up/ down mode. (Figure 3 below).






Figure 3: Use of internal pull up resistance to interface switch


Also, there are cases in communication protocols where pins act as bidirectional interfaces; in such instances we tend to use external pull-up/pull-down resistors.

One point worth noting: these internal resistances are generally very inaccurate, so they cannot be used where precision is a requirement.
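To make the pull-up behavior concrete, here is a small sketch modeling a pin with an internal pull-up and a switch to ground. The supply, resistor, and tolerance values are illustrative assumptions, not figures from the article or any particular datasheet:

```python
# Illustrative model of an internal pull-up driving a switch input.
# VDD, the pull-up value, and its tolerance are assumed numbers.
VDD = 3.3                 # supply voltage (V), assumed
R_PULLUP_NOM = 5600.0     # nominal internal pull-up (ohms), assumed
R_PULLUP_TOL = 0.40       # +/-40% tolerance, typical of on-chip resistors

def pin_voltage(switch_closed, r_pullup=R_PULLUP_NOM, r_switch=1.0):
    """Voltage at the pin: pulled to VDD when open, divided to ground when closed."""
    if not switch_closed:
        return VDD                                  # no current flows
    return VDD * r_switch / (r_pullup + r_switch)   # resistor divider

def logic_level(v, vih=0.7 * VDD, vil=0.3 * VDD):
    """Map a pin voltage to a logic level; None marks the undefined region."""
    if v >= vih:
        return 1
    if v <= vil:
        return 0
    return None
```

Even at the worst-case tolerance extremes the switch still reads a clean ‘0’ or ‘1’, which is why the poor absolute accuracy of the internal resistor is harmless for a switch input yet disqualifying for precision work.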

Information is shared by www.irvs.info

Thursday, June 23, 2011

RF in Action: Wireless switches offer unlimited benefits

Limit switches have been around for decades, protecting heavy equipment and providing important position information. They are used in everything from crane booms to gates, lifts to storage tanks — anywhere there is a need to sense the presence, absence or position of a moving object. In a crane application, the limit switch is located on the end of the boom. The limit switch could be used to indicate to the operator when the cable jib is close to the end of the boom and it is not safe to spool the cable further.



In the last few years, limit switches have become wireless-enabled, using technologies such as IEEE 802.15.4 to transmit information from the remote switch to a receiver, which then converts it into the signals used by standard controllers. Converting switch solutions to a wireless mode addresses a variety of customer needs for lowered cost and increased limit switch installation options, giving early adopters a competitive advantage in the design of next-generation industrial and transportation equipment.

Benefits of Wireless Switching

Wireless limit switches can lower equipment costs in a variety of ways. For one, the cost of manufacturing and installation is reduced. Not only is the expense of wiring eliminated, there are no conduits, clips or connectors required to place a limit switch where it is needed. There are no wire routing problems to solve, no need for pulling wire during installation and fewer restrictions on location and placement of the limit switch.

Wireless limit switches can also reduce maintenance costs. Equipment wiring is less complex with the elimination of wired switches from the mix, simplifying troubleshooting and reducing commissioning time. Further, going wireless increases system reliability by eliminating the potential for having continuity issues with switch wiring or connectors. Switches also become simpler to replace, with no need to disconnect and re-attach wiring and no risk of incorrect wire attachment.

Global limit switches are an essential element of industrial and transportation controls, monitoring position and presence of doors, booms and valves. Conventional wired switches, however, present installation and maintenance challenges, especially in installations that are subject to harsh environments or involve frequent flexing in the wiring. In some cases, traditional wires can represent tripping hazards or can be compromised during normal equipment operation, thus causing expensive machine down-time.


Tuesday, June 21, 2011

Using MCAPI to lighten an MPI load

High-performance computing (HPC) relies on large numbers of computers to get a tough job done. Often, one computer will act as a master, parceling out data to processes that may be located anywhere in the world. The Message Passing Interface (MPI) provides a way to move the data from one place to the next.

Normally, MPI would be implemented once in each server to handle the messaging traffic. But with multicore servers using more than a few cores, it can be very expensive to use a complete MPI implementation because MPI would have to run on each core in the computer in an asymmetric multi-processing (AMP) configuration. The Multicore Communications API (MCAPI), on the other hand – a protocol designed with embedded systems in mind – is a much more efficient way to move MPI messages around within the computer.

Heavyweight champion

MPI was designed for HPC and is a well-established protocol that is robust enough to handle the problems that might be encountered in a dynamic network of computers. For example, such networks are rarely static. Whether it’s due to updates, maintenance, the purchase of additional machines, or even the simple fact that there is a physical network cable that can be inadvertently unplugged, MPI must be able to handle the eventuality of the number of nodes in the network changing. Even with a constant number of servers, those servers run processes that may start or stop at any time. So MPI includes the ability to discover who’s out there on the network.

At the programming level, MPI doesn’t reflect anything about computers or cores. It knows only about processes. Processes start at initialization, and then this discovery mechanism builds a picture of how the processes are arranged. MPI is very flexible in terms of how the topology can be created, but, when everything is up and running, there is a map of processes that can be used to exchange data. A given program can exchange messages with one process inside or outside a group or with every process in a group. The program itself has no idea whether it’s talking to a computer next to it or one on another continent.

So a program doesn’t care whether a computer running a process with which it’s communicating is single-core or multicore, homogeneous or heterogeneous, symmetric (SMP) or asymmetric (AMP). It just knows there’s a process to which it wants to send an instant message. It’s up to the MPI implementation on the computer to ensure that the messages get through to the targeted processes.

Due to the architectural homogeneity of SMP multicore, this is pretty simple. A single OS instance runs over a group of cores and manages them as a set of identical resources. So a process is naturally spread over the cores. If the process is multi-threaded, then it can take advantage of the cores to improve computing performance; nothing more must be done.

However, SMP starts to bog down with more cores because bus and memory access become bottlenecks. For computers that are intended to help solve big problems as fast as possible, it stands to reason that more cores in a box is better, but only if they can be utilized effectively. To avoid the SMP limitations, we can use AMP instead for larger-core-count (so-called “many-core”) systems.

With AMP, each core (or different subgroups of cores) runs its own independent OS instance, and some might even have no OS at all, running on “bare metal.” Because a process cannot span more than one OS instance, each OS instance – potentially each core – runs its own processes. So, whereas an SMP configuration can still look like one process, AMP looks like many processes – even if they’re multiple instances of the same process.

Configured this way, each OS must run its own instance of MPI to ensure that its processes are represented in the network and get fed any messages coming their way. The issue is the fact that MPI is a heavyweight protocol as a result of the range of things it must handle on a network. The environment connecting the cores within a closed box – or even on a single chip – is much more limited than the network within which MPI must operate. It also typically has far fewer resources than a network does. So MPI is over-provisioned for communication within a server (see sidebar).

Assisted by a featherweight

Unlike MPI, the Multicore Association specifically designed the MCAPI specification to be lightweight so that it can handle inter-process communication (IPC) in embedded systems, which usually have considerably more limited resources. While MCAPI works differently from MPI, it still provides a basic, simple means of getting a message from one core to another. So we can use MCAPI to deliver MPI functionality much more inexpensively within a system that has limited resources but also more limited requirements.

There are two possible approaches to bringing MCAPI into an MPI design. The first works if the program using MPI utilizes very few MPI constructs – more or less just sending and receiving simple messages. The idea is to designate one “master” core within the server to run a full-up MPI service plus a translator for all other “accelerator” cores in the box. The accelerator cores will run MCAPI instead of MPI. This means that MPI messages will run between the servers, but MCAPI messages will run between the cores inside the server (see Figure 1).



Fig 1: MPI messages will run between the servers, but MCAPI messages will run between the cores inside the server.


For those program instances running on the accelerator cores, you then replace the MPI calls with the equivalent MCAPI calls – which is why this works only for simpler uses of MPI, since many MPI constructs have no MCAPI equivalents. A translator converts any messages moving between the MPI and MCAPI domains (see Figure 2).

The cost of this arrangement lies in the fact that the program must be edited and recompiled to use MCAPI instead of MPI for the accelerator cores. This also complicates program maintenance due to the existence of two versions of the program – one using MPI and one using MCAPI.
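The master/translator arrangement can be sketched in miniature. The queue below is a toy stand-in for the intra-box MCAPI transport; real code would create MCAPI endpoints and use its message send/receive calls, so the names here are illustrative, not the MCAPI API:

```python
from queue import Queue

# Toy in-memory transport standing in for MCAPI's intra-box messaging.
_endpoints = {}

def _endpoint(core_id):
    return _endpoints.setdefault(core_id, Queue())

def shim_send(dest_core, payload):
    """What an accelerator core's MPI_Send is replaced with."""
    _endpoint(dest_core).put(payload)

def shim_recv(core_id):
    """Blocking receive: the MPI_Recv counterpart on an accelerator core."""
    return _endpoint(core_id).get()

def translator_inbound(mpi_payload, dest_core):
    """Master-core translator: an MPI message arriving from the network is
    re-injected onto the lightweight intra-box transport."""
    shim_send(dest_core, mpi_payload)
```

The point of the sketch is the division of labor: only the master core speaks full MPI to the outside world, while every other core uses the far cheaper local transport.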




Friday, June 17, 2011

An autonomous wireless sensor network for space applications

The importance of wireless sensor networks for space missions shows in multiple applications, such as monitoring the satellite during assembly, equipment integration, and thermal and vibration test phases. The high number of sensors required in space applications underscores the need for wireless sensors, which save time during integration thanks to their simpler links and connections.

ASTRAL, which stands for Autonomous System and TRAnsmission wireLess sensor network for space applications, was created to develop a demonstrator to validate the concept of adapting wireless technology for space missions. The result was a demonstrator for monitoring vibration in satellites on the ground and during launch, over approximately 30 minutes.

The project is financed by the French Research Foundation for Aeronautics and Space (FNRAE). The partners of this project are EADS-ASTRIUM, a wholly owned subsidiary of EADS, a global leader in aerospace, defense and related services; 3D Plus, a worldwide supplier of advanced high-density 3D microelectronic products, and the CEA, a French government-funded technological research organization.

The architecture
Arranged in a star-network topology (Figure 1), the sensor network is composed of a single master node and multiple slave nodes. It employs two strategies of data transmission: direct transmission of raw data at sample rates up to 2 kHz, using 5 slaves, and local computation, which permits higher sample rates, up to 20 kHz.



A flexible architecture has been designed allowing master/slave reversibility. The basic architecture is the same for the master and the slaves (Figure 2). The nature of the node is defined by simple programming. Low, optimized power consumption was taken into account in the architecture design.


Figure 2: Architecture

The system is implemented on a single printed circuit board (PCB) designed either to be cut up and stacked using the 3D Plus technology, in the case of slave nodes, or kept as one board for the master node. Three parts are visible on this PCB (Figure 3). The left part holds the analog section, with the sensor and its associated electronic components, the antenna, the RF transceiver and the power-supply monitoring. The central part holds the digital section, with the processor and the analog-to-digital converter (ADC), and the right part is dedicated to the master node, with serial links and contacts for testing.



Figure 3: Printed circuit board (PCB)


Thursday, June 16, 2011

Dodging counterfeit electronic components is far more difficult than in the past

A counterfeit electronic component operating in an electronics system may make itself known when the system experiences an unexpected failure. The failure may be relatively innocuous—a monitoring device that suddenly begins to display meaningless numbers—or it may be directly life-threatening, such as a functional failure in a defibrillator. Even after the failure has occurred, the failed component may not be recognized as counterfeit unless it’s inspected for that purpose.



Due to the nature and complexity of the global electronics component supply system, it’s fairly easy for counterfeit components to be unknowingly purchased by practically any system assembler. The ways in which counterfeits are produced, and the rapidly increasing skill of the counterfeiters in disguising their bogus components, make the problem even more severe.

Many counterfeit components find their way into the inventories of independent distributors who fill the critical role of supplying manufacturers with new components that are either obsolete, allocated, or on long lead-times from the factory. To protect their customers from the increasing counterfeit threat, some distributors have begun thorough incoming inspection processes to detect counterfeits and remove them from the supply chain. As the quantity of counterfeits has grown, and as counterfeiters have become more sophisticated, this effort has grown into a sizable laboratory in some cases.

The vast majority of counterfeit electronic components are plastic-encapsulated microcircuits (PEMs) that began life on a previously used circuit board that was ultimately scrapped, probably within a western country. When the electronic equipment is discarded, its boards are harvested and shipped in vast quantities to China. Trucks haul the export containers from the docks of Hong Kong harbor to the town of Shantou in mainland China, where most of the component harvesting and counterfeit processing is performed.

All of the ones that look the same go into the same pile. Aside from the fact that some of the components are unquestionably dead electronically at this point, a single pile may contain components having different revision codes, or even different functions. But every component in a pile will get the same new matching part-marking. The purpose of all of this work is to make each component as cosmetically similar as possible to the new component that it’s impersonating.

The point at which these counterfeits have value is at the moment when they are sold to a buyer as factory-new components. At that time, the counterfeiter’s work is, so to speak, finished. If he is selling, for example, what purports to be a reel of PQFPs made by vendor ABC, he will also counterfeit, or have someone else counterfeit, the reel and its labels. What the buyer examines will appear to be a new reel that holds new vendor ABC PQFPs.

If the counterfeiter is worried that his cosmetic work may not be quite up to standards, he may go out and purchase a few genuine vendor ABC PQFPs, and put them at the beginning, middle and ends of the reel. The sharp-eyed buyer who examines the first or last 30 or 40 components on the reel will be satisfied. The remaining 99% of the reel, however, holds counterfeits.


Wednesday, June 15, 2011

Piezoelectric fans and their application in electronics cooling

Piezoelectric fans seem to represent an example of research and development that has culminated in a product that is deceptively simple. Although piezoelectric technology is capable of producing rotary motion, the fans operate quite differently from rotary fans, as they generate airflow with vibrating cantilevers instead of spinning blades.

Piezoelectric, as derived from its Greek root words, means pressure and electricity. There are certain substances, both naturally occurring and man-made, that will produce an electric charge from a change in dimension, and vice-versa. Such a device is known as a piezoelectric transducer (PZT), which is the prime mover of a piezoelectric fan. When electric power, such as AC voltage at 60 Hz, is applied, it causes the PZT to flex back and forth, also at 60 Hz.



The magnitude of this motion is very tiny, so to amplify it, a flexible shim or cantilever, such as a sheet of Mylar, is attached and tuned to resonate at the electrical input frequency. Since piezoelectric fans must vibrate, they must use a pulsating or alternating current (AC) power source. Standard 120 V, 60 Hz electricity, just as it is delivered from the power company, is ideal for this application, since it requires no conversion.

[If direct current (DC), such as in battery-operated devices, is the power source, then an inverter circuit must be employed to produce an AC output. An inverter may be embodied in a small circuit board and is commercially available with frequency ranges from 50 to 450Hz.]

Driving the fan at resonance minimizes the power consumption of the fan while providing maximum tip deflection. The cantilever is tuned to resonate at a particular frequency by adjusting its length or thickness. The PZT itself also has a resonant frequency, so the simplistic concept of adjusting only the cantilever dimensions to suit any frequency may still not yield optimum performance. (Conceivably, tuning the electrical input frequency to match existing cantilever dimensions may work, though with the same caveat: the resonant frequencies of all the components must match, within reason.)
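The tuning relationship can be sketched with the standard Euler-Bernoulli formula for the first bending mode of a uniform rectangular cantilever; the Mylar material constants below are ballpark assumptions, not values from the article:

```python
import math

# First-mode resonance of a uniform rectangular cantilever (Euler-Bernoulli):
#   f1 = (lambda1^2 / (2*pi)) * (t / L^2) * sqrt(E / (12 * rho))
# Material numbers below are assumed, ballpark values for Mylar (PET).
E_MYLAR = 4.0e9      # Young's modulus, Pa (assumed)
RHO_MYLAR = 1390.0   # density, kg/m^3 (assumed)
LAMBDA1 = 1.875104   # first-mode eigenvalue for a clamped-free beam

def cantilever_f1(length_m, thickness_m, E=E_MYLAR, rho=RHO_MYLAR):
    """Fundamental resonant frequency (Hz) of the shim."""
    return (LAMBDA1 ** 2 / (2 * math.pi)) * (thickness_m / length_m ** 2) \
        * math.sqrt(E / (12 * rho))
```

Because f1 scales as t/L², doubling the shim length quarters the resonant frequency, which is why small adjustments of length or thickness suffice to hit a 60 Hz (or inverter-chosen) drive frequency.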

Applications for piezoelectric fans are just in their infancy and could really thrive through the imagination of designers. This article, which originally appeared in the April 2011 issue of Qpedia (published by Advanced Thermal Solutions, Inc. and used with permission here) explores the principles, construction, implementation, and installation of piezoelectric fans.



Monday, June 13, 2011

Performing efficient arctangent approximation

Fast and accurate methods for computing the arctangent of a complex number x = I + jQ have been the subject of extensive study because estimating the angle θ of a complex value has so many applications in the field of signal processing. The angle of x is defined as θ = tan⁻¹(Q/I).

Practitioners interested in computing high speed (minimum computations) arctangents typically use look-up tables where the value Q/I specifies a memory address in programmable read-only memory (PROM) containing an approximation of angle θ.

Those folks interested in enhanced precision implement compute-intensive high-order algebraic polynomials, where Chebyshev polynomials seem to be more popular than Taylor series, to approximate angle θ.

(Unfortunately, because it is such a non-linear function, the arctangent is resistant to accurate reasonable-length polynomial approximations. So we end up choosing the least undesirable method for computing arctangents.)

Here’s another contender in the arctangent approximation race that uses neither look-up tables nor high-order polynomials. We can estimate the angle θ in radians, of x = I + jQ using the following approximation



where –1 ≤ Q/I ≤ 1. That is, θ is in the range –45 to +45 degrees (–π/4 ≤ θ ≤ +π/4 radians). Equation (13–107) has surprisingly good performance, particularly for a 90 degree (π/2 radian) angle range.

Figure 13–59 below shows the maximum error is 0.26 degrees using Eq. (13–107) when the true angle θ is within the angular range of –45 to +45 degrees. A nice feature of this θ computation is that it can be written as:



eliminating Eq. (13–107)’s Q/I division operation, at the expense of two additional multiplies.



Figure 13–59. Estimated angle theta error in degrees.

Another attribute of Eq. (13–108) is that a single multiply can be eliminated with binary right shifts. The product 0.28125Q² is equal to (1/4 + 1/32)Q², so we can implement the product by adding Q² shifted right by two bits to Q² shifted right by five bits.

This arctangent scheme may be useful in a digital receiver application where I² and Q² have been previously computed in conjunction with an AM (amplitude modulation) demodulation process or envelope detection associated with automatic gain control (AGC).
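A quick numerical sketch (Python, purely for illustration) of the division-free form and the shift-based multiply; the constant split follows the (1/4 + 1/32) identity described above:

```python
import math

def atan_approx(i, q):
    """Division-free form: theta ~= I*Q / (I^2 + 0.28125*Q^2),
    valid for |Q| <= |I| (true angles within +/-45 degrees)."""
    return i * q / (i * i + 0.28125 * q * q)

def mul_0_28125(q_squared):
    """Multiply an integer Q^2 by 0.28125 = 1/4 + 1/32 using right shifts.
    Exact when Q^2 is a multiple of 32."""
    return (q_squared >> 2) + (q_squared >> 5)
```

Sweeping the true angle across ±45 degrees keeps the estimate within roughly a quarter of a degree of the exact arctangent, matching the error behavior described in the text.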

We can extend the angle range over which our approximation operates. If we break up a circle into eight 45 degree octants, with the first octant being 0 to 45 degrees, we can compute the arctangent of a complex number residing in any octant. We do this by using the rotational symmetry properties of the arctangent:



These properties allow us to create Table 13-6 below.




Table 13–6 Octant Location versus Arctangent Expressions

So we have to check the signs of Q and I, and see if | Q | > | I |, to determine the octant location, and then use the appropriate approximation in Table 13–6. The maximum angle approximation error is 0.26 degrees for all octants.
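Since Table 13–6 itself is not reproduced above, here is a hedged reconstruction of that octant logic from the arctangent symmetry properties, written as a Python sketch; this version folds every result directly into the –π to +π range:

```python
import math

def atan_core(i, q):
    """+/-45 degree core approximation: I*Q / (I^2 + 0.28125*Q^2)."""
    return i * q / (i * i + 0.28125 * q * q)

def atan2_approx(i, q):
    """Full-range angle estimate from the signs of I and Q and the |Q| > |I|
    test. A standard reconstruction of the octant mapping (not a verbatim
    copy of Table 13-6); results fall in -pi..+pi."""
    if i == 0.0 and q == 0.0:
        return 0.0                      # angle undefined; pick 0
    if abs(i) >= abs(q):                # octants 1, 4, 5, 8
        core = atan_core(i, q)
        if i > 0:
            return core                 # octants 1 and 8
        # I < 0: reflect through the origin into octants 4 and 5
        return core + math.pi if q >= 0 else core - math.pi
    # |Q| > |I|: octants 2, 3, 6, 7 -- approximate atan(I/Q) via swapped args
    core = atan_core(q, i)
    return math.pi / 2 - core if q > 0 else -math.pi / 2 - core
```

The worst-case error stays the same as the single-octant approximation, since every branch only adds or subtracts exact multiples of π/2.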

When θ is in the 5th octant, the above algorithm will yield a θ′ that’s more positive than +π radians. If we need to keep the θ′ estimate in the range of –π to +π, we can rotate any θ residing in the 5th octant by +π/4 radians (45 degrees), by multiplying (I + jQ) by (1 + j), placing it in the 6th octant.

That multiplication yields new real and imaginary parts defined as



Then the 5th octant θ’ is estimated using I’ and Q’ with




Thursday, June 9, 2011

Not complying with IEC 62304 for software design could be detrimental on many levels

Medical devices have become increasingly sophisticated, now employing software-controlled applications whose failure to function correctly could result in death or serious injury. Despite this increased complexity, medical software standards continue to reflect only the rigor of low-risk applications.



Notably, many medical device faults stem from product upgrades. An analysis of 3,140 medical device recalls conducted between 1992 and 1998 by the FDA reveals that 7.7% are attributable to software failures. Of the software recalls, 79% were caused by software defects introduced after software upgrades.

Reacting to an ongoing inability to manage product upgrades, the FDA recently took punitive action against Baxter Healthcare and its infusion pumps, forcing a recall. On April 27, 2010, the FDA had warned users about faulty components in defibrillators manufactured by Cardiac Science Corp. Unable to remedy the problems with software patches, Cardiac Science was forced to replace the 24,000 defibrillators that were implicated. As a result, Cardiac Science reported a net loss of $18.5 million. (The company was eventually acquired by Opto Circuits.)

These recalls have resulted in a change in focus by many medical device providers. Many companies are now changing their approach to improve their software processes as well as to adopt IEC 62304, a standard for design of medical products recently endorsed by the European Union and the United States. IEC 62304 introduces a risk-based compliance structure—Class A through C, where the failure of Class C software could result in death or serious injury—that ensures that medical applications comply with the standards suitable for their risk assessment. This standard outlines requirements for each stage of the development lifecycle and defines the minimum activities and tasks to be performed to provide confidence that the software has been developed in a manner that’s likely to produce highly reliable and safe software products.

IEC 62304 focuses on the software development process, defining the majority of the software development and verification activities. This process includes activities like software development planning, requirement analysis, architectural design, software design, unit implementation and verification, software integration and integration testing, system testing, and finally software release.


Monday, June 6, 2011

Security ICs provide device protection for medical equipment manufacturers

Serious consequences can arise when medical equipment is misused, either accidentally or maliciously. Also, the issue of operational and data security is growing increasingly important as system designs become more capable, feature-rich and complex. This is becoming a particularly crucial design factor for products that leverage web connectivity to deliver fast, safe and cost-effective services across the Internet.

To address these issues and also to provide safety features for systems that could potentially be misused, engineers building medical equipment increasingly find it essential to include a security-type device such as a secure IC. They can use robust security solutions to prevent unauthorized devices from being connected to their equipment, protect against ‘man-in-the-middle’ attacks that intentionally modify the data flow between an authorized device and a host system, and control the usage of disposable devices, among other purposes.

A solid M2M (machine-to-machine) authentication solution based on a security IC is essential for any medical system requiring security capabilities. It can ensure safe, normal, uncorrupted equipment operation by providing unequivocal assurance to the embedded control electronics that the system is communicating with a genuine peer system or subsystem, before providing that entity a service or authorizing it to access sensitive data.

The integrity of the authentication service that an M2M solution delivers should also be leveraged whenever the use of the medical equipment requires additional features, such as functions for performing enforcement or applying capability limitations. Of course, the fundamental foundation of tight security that the solution achieves must always be maintained.

Authentication methods use either symmetric or asymmetric algorithms

Authentication schemes are typically implemented with symmetric algorithms such as the DES and AES types, or with asymmetric algorithms such as the RSA type. These methods are differentiated by the complex mathematical manipulations each uses for authentication. Security ICs can typically process both symmetric and asymmetric algorithms, if necessary.

Symmetric algorithms are often preferred because they use small key sizes that enable quick computations, only a few CPU clock cycles per data block. However, since the same key is used to perform both host and device authentication, it is mandatory that the identity of that key be kept secret, safe from theft or duplication by any means.
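As an illustration of symmetric-key authentication, here is a minimal challenge-response sketch. HMAC-SHA-256 and the hard-coded key are assumptions made for the demo; a real security IC would hold the key in tamper-resistant storage and typically use AES or (T)DES primitives:

```python
import hashlib
import hmac
import os

# Demo key: in a real design this secret lives inside the security IC,
# never in program memory.
SHARED_KEY = b"demo-key-not-for-production"

def device_respond(key, challenge):
    """What the peripheral's security IC returns for a host challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_authenticate(key, respond):
    """Host side: issue a random challenge and verify the keyed response."""
    challenge = os.urandom(16)
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, respond(challenge))
```

Because the same key sits on both sides, anyone who extracts it can impersonate either party, which is exactly why the identity of that key must be kept secret.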

Asymmetric algorithms are generally stronger. The process of verifying that a device is genuine requires a unique private/public key pair per device. The private key is known only to the security IC, and the public key is shared with all. This security scheme uses large key sizes and complex mathematical operations. Thus, the authentication process involves a lot of computation, which has the potential to slow down the security function.

Security ICs typically incorporate a cryptographic (crypto) accelerator that speeds up the algorithm processing. Still, the mathematical operations of asymmetric schemes take longer than those of symmetric ones: milliseconds vs. microseconds on a secure IC.

However, the robust security protection that asymmetric schemes deliver usually must be executed only a few times during the life cycle of the device or system it protects—such as when the device or system connects or reconnects to an external component or system. Therefore, the computational delay is seldom problematic.

Some applications of asymmetric algorithms exploit the method’s strong authentication capabilities for internal purposes, rather than external tasks. For example, asymmetric keys can provide the strong protection for loading or managing symmetric keys in certain types of secure ICs. In such chips, the combination of these security schemes maximizes the protection that the chip delivers without significantly degrading its performance.

Design solutions vary in requirements and protection delivery

Medical system designers can choose from several ways to implement M2M authentication schemes:


* Non-standard, low-security designs built with memory-based authentication solutions—These low-cost, often proprietary solutions are extremely vulnerable to physical attacks because the “secret” key information they require isn’t housed in a tamper-proof device. Also, their key lengths—generally 64 bits—are far shorter than what is required by today’s TDES and AES standards, so they don’t meet the stringent security requirements of medical networks.

* Non-standard, non-robust designs built with software encryption only—Software-only solutions, which require a high-performance main system processor, are vulnerable to abuse because they don’t protect the secret key in a secure and tamper-proof device—a serious vulnerability. Furthermore, if conventional microcontrollers are used to run the algorithms, hackers can easily access the algorithm-processing function and get related data out of it.

* Standards-based solutions built with security-IC technologies—Security-IC solutions provide highly robust hardware protection and cryptographic acceleration. They take advantage of the embedded PKI (Public Key Infrastructure) technologies well proven through their vast global deployment in smart cards. These technologies use standardized types of cryptographic operations such as 3DES (168 bits) and RSA (1024 bits, or the 2048 bits that NIST now recommends). The RSA algorithm, for example, significantly simplifies key management in large systems. The long key lengths and proven tamper-proof IC technology used in security-IC solutions meet FIPS requirements for security-sensitive applications.

Figure 1 summarizes these methods for implementing M2M authentication.



Figure 1. Comparison of security technologies for embedded systems.
(Source: Renesas Electronics America Inc.)


Security-IC-based solutions offer significant advantages for medical product applications. In particular, they deliver robust hardware protection for safely housing secret and private keys, safeguards that are far superior to those of conventional IC solutions.

Conformance and compliance issues for medical product designs

Authentication solutions for medical equipment must address two important design issues: conformance to the ISO 14971 standard and compliance with HIPAA regulations.

Medical products built using standards-based security-IC technologies can meet all applicable security performance requirements. With regard to ISO 14971, conformant designs can prevent a counterfeit or unauthorized peripheral from entering the supply chain. They also provide a mechanism for ensuring that a peripheral cannot be used past a predetermined useful lifespan.

With regard to HIPAA, compliant medical equipment with robust authentication capabilities can mitigate the risks associated with liability, revenue loss, security breaches, device effectiveness and security-level agreements. Furthermore, the high level of protection provided by the security IC can directly address and resolve issues associated with unfair competition, cost of operation, license and brand protection, and credibility with business partners and customers.

Information is shared by www.irvs.info

Thursday, June 2, 2011

Energy harvesting tipping point for wireless sensor applications

Ever since the first watermills and windmills were used to generate electricity, energy harvesting has been an attractive source of energy with great potential. In recent years, energy harvesting technology has become more sophisticated and efficient, and energy storage technologies, such as supercapacitors and thin-film batteries (TFBs), have become more cost-effective. Among the final pieces in the energy harvesting solution jigsaw puzzle are integrated circuits that can perform useful functions, such as algorithmic control and wireless communications using tiny amounts of energy. We have now reached a technological tipping point that will result in the evolution of energy-harvesting-based systems from today’s niche products, such as calculators and wrist watches, to their widespread use in building automation, security systems, embedded controls, agriculture, infrastructure monitoring, asset management and medical monitoring systems.

The wireless sensor node is one of the most important product types forecast for growth as an energy-harvesting solution. Wireless sensors are ubiquitous and very attractive products to implement using harvested energy. Running power to wireless sensors is often impractical, and, since sensor nodes are commonly placed in hard-to-reach locations, changing batteries regularly can be costly and inconvenient. It is now possible to implement wireless sensors using harvested energy because of the off-the-shelf availability of ultra-low-power, single-chip wireless microcontrollers (MCUs) capable of running control algorithms and transmitting data using sophisticated power management techniques.

Low-Power Optimization

Low-power modes on MCUs and wireless transceivers have been optimized in recent years to enable effective power management in wireless sensor applications. Figure 1 illustrates a typical wireless sensor node power cycle.


Figure 1. Wireless Sensor Node Power Cycle


The designer’s objective is to minimize the area under the curve in Figure 1, which corresponds to the energy consumed. Energy consumption can be minimized by maximizing the relative amount of time spent in low-power sleep mode and reducing the active-mode time. A fast processing core enables the MCU to execute the control algorithm very quickly, enabling a rapid return to low-power sleep mode and thereby minimizing the power-hungry area under the curve.

Wireless sensor nodes spend most of their time in sleep mode. The only subsystem that stays awake is the real-time clock (RTC). The RTC keeps time and wakes up the wireless sensor node to measure a sensor input. Low-power RTCs typically integrated onto microcontrollers consume only a few hundred nanoamps. It is important to minimize the system’s wake-up time because power is consumed during this time. An RTC uses a free-running counter in the MCU timer subsystem; when the counter rolls over, it generates an interrupt that wakes the MCU. If a 32.768 kHz crystal is used, a 16-bit free-running counter rolls over, and wakes the MCU, every two seconds. If a wider free-running counter, such as a 32-bit counter, is used, the periodic interrupt occurs far less often, and additional power may be conserved.
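The rollover arithmetic above is easy to check: the counter wraps after 2^N ticks of the 32.768 kHz crystal. A minimal sketch:

```python
# Rollover period of a free-running RTC counter clocked by a 32.768 kHz crystal.
F_CRYSTAL_HZ = 32_768

def rollover_period_s(counter_bits: int) -> float:
    """Seconds between rollover interrupts for a counter of the given width."""
    return (2 ** counter_bits) / F_CRYSTAL_HZ

print(rollover_period_s(16))  # 2.0 s -> MCU woken every two seconds
print(rollover_period_s(32))  # 131072.0 s (~36 hours) -> far fewer wake-ups
```

Widening the counter from 16 to 32 bits stretches the forced wake-up interval from 2 seconds to roughly 36 hours, which is why wider counters save power.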

When a wireless sensor node wakes up, it typically measures a sensor signal using the analog-to-digital converter (ADC). It is important to note the wake-up time of the ADC as well as the digital wake-up time, since there is little point in waking up the CPU very quickly if the ADC takes an order of magnitude longer to wake up. A low-power MCU should wake up both the CPU and the ADC in a couple of microseconds. When the sensor node is awake, the MCU current is typically approximately 160 µA/MHz. Once the sensor data has been measured, the algorithm running in the MCU decides whether the data should be transmitted by the radio. To send the data, a low-power ISM-band radio consumes somewhat less than 30 mA for only a millisecond or so. When this peak current is averaged out, the overall average current consumption of the wireless sensor node is in the low microampere range.
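A duty-cycle estimate makes these figures concrete. The sketch below uses the currents quoted above; the 25 MHz core clock and the 1 ms active and transmit windows are illustrative assumptions. Transmitting on every 2-second wake-up is a worst case; if the algorithm transmits less often, the average falls into the low-microamp range cited above.

```python
# Duty-cycle estimate of average current for a wireless sensor node,
# using the article's figures; clock speed and on-times are assumptions.

SLEEP_A  = 300e-9        # RTC-only sleep current (a few hundred nA)
ACTIVE_A = 160e-6 * 25   # 160 uA/MHz at an assumed 25 MHz core clock = 4 mA
TX_A     = 30e-3         # radio transmit burst (just under 30 mA)

def avg_current(period_s: float, active_s: float, tx_s: float) -> float:
    """Time-weighted average current over one wake/measure/transmit cycle."""
    sleep_s = period_s - active_s - tx_s
    charge = SLEEP_A * sleep_s + ACTIVE_A * active_s + TX_A * tx_s
    return charge / period_s

# Wake every 2 s, run the algorithm for 1 ms, transmit for 1 ms.
i_avg = avg_current(period_s=2.0, active_s=1e-3, tx_s=1e-3)
print(f"{i_avg * 1e6:.1f} uA")  # ~17.3 uA; the radio burst dominates
```

Even in this worst case, the radio term contributes most of the average, which motivates the next point: minimizing radio on-time is the biggest lever on system power.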

The radio transmission consumes most of the current in the system. Minimizing the amount of time the radio is on is essential to conserving energy. One way to achieve this is to avoid complicated communications protocols that require the transmission of many bits of data. Steering clear of standards with large protocol overhead is desirable when power is at a premium. It is also important to consider the desired range. Wireless range can be traded for power consumption. An interesting approach to balancing this trade-off is to use dynamic ranging, which allows full-power transmissions when maximum energy is available but reduces the output power level when harvested energy is limited.
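The cost of protocol overhead can be quantified: radio on-time, and hence transmit energy, scales with the number of bits sent. The data rate, supply voltage, and frame sizes below are illustrative assumptions.

```python
# How protocol overhead inflates radio on-time and transmit energy.
# Data rate, voltage, and frame sizes are illustrative assumptions.

DATA_RATE_BPS = 100_000  # assumed 100 kbps ISM-band link
TX_CURRENT_A  = 30e-3    # transmit current from the article
SUPPLY_V      = 3.0      # assumed battery voltage

def tx_energy_uj(frame_bytes: int) -> float:
    """Energy (microjoules) to transmit one frame of the given size."""
    on_time_s = frame_bytes * 8 / DATA_RATE_BPS
    return TX_CURRENT_A * SUPPLY_V * on_time_s * 1e6

lean  = tx_energy_uj(16)   # compact proprietary frame
heavy = tx_energy_uj(128)  # same payload wrapped in a verbose protocol
print(lean, heavy)         # 8x the bytes -> 8x the energy per transmission
```

An eightfold increase in frame size costs eight times the energy per transmission, which is why standards with large protocol overhead are undesirable when power is at a premium.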

Another way to reduce the wireless sensor node’s power consumption is to minimize the number of chips used in the system. Fewer chips on the printed circuit board (PCB) result in lower leakage current losses. Using an MCU that integrates as many functions as possible ultimately helps reduce overall current consumption. If a dc-dc converter is integrated onto the MCU, it can be switched off when the MCU is sleeping. Silicon Labs’ Si10xx wireless MCU, for example, contains an integrated dc-dc converter that allows the system to be powered by a single AAA alkaline battery and still achieve +13 dBm output power at the antenna. It has been used successfully in energy harvesting wireless sensor nodes.
