IRVS VLSI IDEA INNOVATORS
VLSI Project, Embedded Project, Matlab Projects and courses with 100% Placements

Thursday, July 28, 2011

Integrated radar front end trims system cost and size

Semiconductors are stepping up to meet the challenge of active automotive safety systems. As crash detection begins to merge with other electronics in the vehicle, such as communications and advanced driver assistance, the automobile is becoming more autonomous and more intelligent. Electronic systems that can act faster than the driver will be able to take control to reduce the severity and frequency of accidents, saving lives on the roads.

In the area of active safety, systems enabled by radar technology are becoming more prevalent. Adaptive Cruise Control (ACC) allows the driver to set a safe "follow distance" to the car in front of him or her, and automatically accelerates and decelerates the car to keep that follow distance constant. Some systems also include automatic braking features that apply the brakes if the car in front stops quickly or if an object blocks the road. Likewise, Blind Spot Detection systems can also depend on radar.




As in-car radar moves from being a luxury option to a standard safety feature, and from high-end to mid-range cars, the adoption and growth rates depend on the system cost. As radar becomes more affordable, and offers better performance in terms of target classification and range resolution, it will become a more popular option.

For system designers, there is a need to add these safety features without incurring substantial cost while still meeting the automotive industry's stringent quality requirements. Additionally, the radar sensor module must be kept small enough to fit into areas of the car, such as behind the bumper, which were not originally designed to house such electronics.


Tuesday, July 26, 2011

Designing Wi-Fi connectivity into your embedded "Internet thing"

Embedded systems for a large variety of applications--including appliances, automation systems, medical devices, entertainment systems, and energy management--today already use or can potentially use a wireless interface.

And just as with wired connectivity, designers can choose from a number of available wireless systems. Zigbee, Bluetooth and proprietary wireless mechanisms have been used in numerous deployments, but there is a growing trend toward standard IP-based 802.11 wireless LAN, or Wi-Fi as it is also known, as the primary wireless interface in embedded systems. This has not happened as a natural evolution--rather, there has been a concerted effort from manufacturers of Wi-Fi devices, and from the Wi-Fi Alliance, to enable ease of integration of Wi-Fi in embedded environments.

In this article we look at what defines ease of integration of Wi-Fi and how embedded-system designers can easily build systems with Wi-Fi connectivity. We examine hardware and software integration as well as the optimization of Wi-Fi parameters for robust and energy-efficient connectivity for these applications.

Building special purpose Internet "things"

Embedded devices are built for a specific purpose and are, by definition, based on a microcontroller as the core functional block interfaced to multiple peripheral modules providing specialized functionality with limited memory resources. These devices often need to communicate with external monitoring or control systems, and when this communication is based on the universal TCP/IP mechanism, they form a part of the rapidly growing ‘Internet of Things.’

Whatever the application of an embedded device may be, its development can be complex: hardware functionality, software procedures and system-level considerations must all be handled well. Microcontroller vendors therefore spend much effort on development and evaluation kits that greatly ease the software and hardware integration effort.

The possibility of having Wi-Fi connectivity in an embedded system enables a plethora of new applications the system can address. However, since Wi-Fi was originally created to provide high-speed data connectivity for the networking market, integrating such technology into embedded devices poses many challenges.

Wi-Fi integration into embedded devices is a fairly recent trend; designers have to face a number of new considerations that involve hardware and software, as well as system-level aspects such as regulatory certification.


Friday, July 22, 2011

A19 LED bulbs: What’s under the frosting?

Standard A19 format light bulbs, found today in most lamps and luminaires, are now available in LED versions that retail between $20 and $40 per 40-W- or 60-W-equivalent bulb. Some bulbs are dimmable, some not, and some only with specific dimmers. They all advertise 25,000 to 50,000 hours’ expected lifetime, based on three to four hours’ daily usage. If you use them appropriately and sparingly, you might expect your light bulbs to outlive you.

Each of the bulbs comes in a specially designed package, unlike tungsten filament and CFL bulbs, which ship in nondescript shrink wrap. The fancy packaging adds to the overall cost.



These bulbs clearly are not yet positioned as commodity items; they are expensive and are expected to last. But the price of electronic gadgets has dropped so much of late that longevity is no longer the main concern. So why is a common light bulb more expensive to buy than a cheap digital camera?

Looks count in a category as simple as light bulbs, and each of the bulbs we examined has a unique appearance. For example, the GE bulb has a ceramic neck and fins and a glass bulb, and is more costly than those using plastic and metal.

All of the bulbs have a small printed-circuit board contained within the neck, relying heavily on large electrolytic capacitors and transformers. The reliability factor of LEDs has increased tremendously.





Tuesday, July 19, 2011

Simulation techniques test automotive cluster display ECUs

With the automotive display cluster being the main means of conveying vehicle system status and driving conditions to the driver, it is of utmost importance to ensure reliable functional testing for these cluster devices. This article describes the test coverage for a typical automotive cluster and how these tests are performed through system simulation techniques.

All vehicles are equipped with a panel that displays the status of the vehicle's systems and driving conditions to the driver. This cluster assembly (also known as the dashboard) usually includes a speedometer, tachometer, fuel gauge, temperature gauge, odometer, and a set of telltale warning lamps. In addition, most modern vehicles are also capable of on-board diagnostics, enabled by embedded systems connected through communication networks such as controller area network (CAN) and K-line.



All drivers rely on the dashboard for every vital piece of information: When the low fuel indicator lights up, it’s about time to visit the gas station; if the brake warning light remains on even though the handbrake is released, this could indicate insufficient brake fluid and it may be unsafe to use the vehicle. Therefore, a dashboard with guaranteed performance is important to provide a better and safer driving experience.

In the automotive industry, real loads and real stimuli, rather than standard test commands, are "must haves" during the product testing stages, as the actual functions of a vehicle depend on them. Clusters need to be tested correctly with these loads and stimuli.

For a novice automotive electronics test engineer, this may well be very challenging, as every section of a cluster calls for a different set of inputs or outputs. The door indicator on the dashboard gets its input from the car doors and has to notify the driver correctly of their status; the tachometer has to display the rotation rate of the engine's crankshaft while the engine is running.

It might end up that the entire vehicle has to be placed on the production floor just for cluster testing. In manufacturing, production floor space equates to premium real estate, and cost must be well-managed, thus making the above approach unrealistic. In addition, to guarantee the accuracy for every manufacturing test performed, well-calibrated equipment must be used to achieve accurate test results. What should be the design of a practical test system for automotive cluster testing in production?

In manufacturing, most of the end-of-line functional test systems would resemble the standard Agilent TS5000 series system which houses industry-standard equipment. A typical test system will include:

1. Power supply to represent vehicle battery
2. Multimeter for voltage and current measurement
3. Frequency generator to source square waves of various frequencies
4. Switch/load box and plug-in cards (relay cards)



The instruments listed above form the backbone of an automotive cluster tester. They are used for powering up the module, stimulating it, switching signals, and performing measurements on it.
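As a concrete example, the frequency generator can stand in for a wheel-speed sender: the speedometer expects a pulse train whose frequency is proportional to road speed. Here is a minimal sketch of that conversion; the pulses-per-revolution and tire-circumference figures are hypothetical values chosen for illustration, not numbers from this article.

```c
#include <stdio.h>

/* Hypothetical wheel-speed sender characteristics (illustrative only). */
#define PULSES_PER_REV        4      /* pulses per wheel revolution     */
#define TIRE_CIRCUMFERENCE_M  1.93   /* metres travelled per revolution */

/* Frequency (Hz) the generator must source so the cluster under test
 * displays the requested road speed. */
static double stimulus_freq_hz(double speed_kmh)
{
    double metres_per_s = speed_kmh * 1000.0 / 3600.0;
    double revs_per_s   = metres_per_s / TIRE_CIRCUMFERENCE_M;
    return revs_per_s * PULSES_PER_REV;
}

int main(void)
{
    for (double v = 20.0; v <= 120.0; v += 20.0)
        printf("%5.0f km/h -> %6.1f Hz\n", v, stimulus_freq_hz(v));
    return 0;
}
```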


Saturday, July 16, 2011

Instrumentation amplifier combines low noise, power and distortion

Analog Devices Inc. has introduced an ultra-low noise, low-power low-distortion instrumentation amplifier (in amp) for the precision measurement of extremely small signals present in noisy industrial operating environments.

The high-bandwidth AD8429 in amp is one of the fastest in amps on the market with a current feedback architecture that offers 15 MHz (G=1) bandwidth and a 22-V/μs slew rate, which is 30 percent higher than competing in amps, according to the company. With a low distortion of -130-dB, the AD8429 is robust enough for applications that demand reduced size, power and distortion levels, such as healthcare instrumentation, precision data acquisition equipment and industrial vibration analysis.

Key highlights:

* Low noise

o 1-nV/√Hz input noise
o 45-nV/√Hz output noise

* High accuracy dc performance

o 90-dB CMRR minimum (G = 1)
o 50-μV maximum input offset voltage
o 0.02 percent maximum gain accuracy (G = 1)

* AC specifications

o 80-dB CMRR to 5 kHz (G = 1)
o 15-MHz bandwidth (G = 1)
o 22-V/μs slew rate
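Using the figures above, a designer can estimate the total noise referred to the input (RTI) at a given gain by combining the input noise with the output noise divided by the gain in root-sum-square fashion. This is the standard in-amp noise model, sketched here with the AD8429's headline numbers:

```c
#include <math.h>
#include <stdio.h>

/* AD8429 noise densities from the highlights above, in nV/sqrt(Hz). */
#define E_NI_NV  1.0    /* input noise  */
#define E_NO_NV  45.0   /* output noise */

/* Total noise referred to the input at gain G: the output-noise term is
 * divided by G before root-sum-square combining. */
static double rti_noise_nv(double gain)
{
    double e_no_rti = E_NO_NV / gain;
    return sqrt(E_NI_NV * E_NI_NV + e_no_rti * e_no_rti);
}

int main(void)
{
    const double gains[] = { 1.0, 10.0, 100.0, 1000.0 };
    for (size_t i = 0; i < sizeof gains / sizeof gains[0]; i++)
        printf("G = %6.0f: %.2f nV/sqrt(Hz) RTI\n",
               gains[i], rti_noise_nv(gains[i]));
    return 0;
}
```

At G = 1 the output noise dominates (about 45 nV/√Hz referred to input); by G = 100 the amplifier is essentially at its 1-nV/√Hz input-noise floor, which is why in amps like this are usually run at high gain for small-signal work.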


Friday, July 15, 2011

The anatomy of a Human Interface Device

With the advent of such innovative and user-friendly products as smart phones and tablets, consumer expectations regarding the user experience have significantly changed. This article describes the basics of a Human/Machine interface and the ingredients of a Human Interface Device (HID).

Specifically, a user interface is the medium of communication or the ‘language’ between a human and a machine and the language popular right now is ‘Touch.’ As in any communication system, both the human and the machine need to speak the same language, which is equivalent to encoding and decoding in the machine world. The effectiveness of a user interface depends on how well the HID gathers input from the user and the system responds with feedback. Note that HID is used as a generic term in this article and does not mean the HID class as defined in the USB protocol.

Any signal in the real world is analog in nature. Even though the world is rapidly becoming digital, inputs to a system are still, and will continue to be, analog. Hence, we have to make the system adapt by converting the input from analog to digital with Human Interface Devices (HIDs). HIDs act as a bridge between humans and machines, decoding human actions (touch, gesture, etc.) into machine-understandable instructions.

Any Human Machine Interface will have the following sequence of operational steps:

* User action

* Identifying/decoding the user action

* Converting the user action into a machine control parameter

* Machine Acknowledgement / Feedback



Figure 1 – Human Machine Interface – The Ecosystem



Figure 2 – HID – Generic Architecture

Sensors are a primary part of any HID, translating any form of user signal into a machine-understandable electrical signal. The output of the sensor is predominantly analog and in most cases requires conditioning, such as filtering or amplification, before the signal is converted into a digital representation. The analog-to-digital converter must differentiate noise from signal, and is a key component of the HID. The simplest analog-to-digital converter in the world is a mechanical switch that simply converts the analog finger action (push, toggle, etc.) into a digital ON or OFF.
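Even that simplest converter needs a little digital help: mechanical contacts bounce, so the raw ON/OFF level is usually debounced before being treated as a user action. A minimal counter-based sketch, where read_switch_raw() is a hypothetical platform call that samples the switch pin:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical platform call: returns the raw logic level of the switch pin. */
extern bool read_switch_raw(void);

/* Classic counter-based debounce: the reported state changes only after the
 * raw input has held the new level for DEBOUNCE_TICKS consecutive samples
 * (e.g. 20 samples at a 1 ms poll rate = 20 ms). */
#define DEBOUNCE_TICKS 20

bool debounced_switch_state(void)
{
    static bool    stable_state = false;
    static uint8_t counter      = 0;

    bool raw = read_switch_raw();
    if (raw != stable_state) {
        if (++counter >= DEBOUNCE_TICKS) {
            stable_state = raw;   /* new level held long enough: accept it */
            counter = 0;
        }
    } else {
        counter = 0;              /* glitch or bounce: start over */
    }
    return stable_state;
}
```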

The digital back end is responsible for receiving the digital data and sending it to the CPU in a format the system expects and understands. The CPU then responds back to the user with some form of feedback.

The physical location of the HID between the human and the machine is of high significance in the architecture of the HID. For example, in a tablet, the HID is a touch screen, which is part of the system itself. In a game console like the Nintendo Wii, the HID (the Wii Remote) is in the hands of the user, far removed from the central console. There are some key parameters that must be considered during the design of a User Interface/HID: ease of use, ease of design, ease of manufacturing, power, form factor, accuracy, size, cost, speed, resolution, scalability, and precision.



Figure 3: Capacitive Touch based HID

Human touch is the user action and the physical connection for the HID. The HID senses the capacitance change caused by the user action. The change in capacitance is converted into digital counts, which are then further processed to determine whether there was a finger touch. The processed output is then sent to the host controller for further action through a serial protocol such as SPI, I2C, or USB.
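That count-processing step is typically a baseline-plus-threshold comparison: the untouched level drifts slowly with temperature and humidity, so firmware tracks it and flags a touch when the counts rise past the baseline by more than a noise margin. A sketch under those assumptions; read_sensor_counts() and the constants are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical platform call: raw capacitance-to-digital counts. */
extern uint16_t read_sensor_counts(void);

#define TOUCH_THRESHOLD  80   /* counts above baseline treated as a touch */
#define BASELINE_DIV     64   /* slow IIR tracking of the untouched level */

bool is_finger_present(void)
{
    static int32_t baseline = -1;
    int32_t counts = read_sensor_counts();

    if (baseline < 0)
        baseline = counts;               /* first-call initialization */

    bool touched = counts > baseline + TOUCH_THRESHOLD;

    /* Track slow environmental drift only while untouched, so a long touch
     * is not absorbed into the reference level. (A production filter would
     * keep fractional bits; this coarse form can stall on tiny offsets.) */
    if (!touched)
        baseline += (counts - baseline) / BASELINE_DIV;

    return touched;
}
```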



Figure 4. Finger Navigation based HID

Human movement over a sensor is the user action and the physical connection for the HID. The HID senses light reflected by the moving finger and converts it into (X, Y) coordinates. These coordinates are then communicated to the central console through wireless protocols such as IR, a proprietary RF standard, or Bluetooth, depending on the system design.



Figure 5. Speech-based HID

Voice is processed and sampled to generate secured/unsecured bit streams and sent to the central processor for further processing.



Figure 6. Movement-based HID

An accelerometer senses 3-axis movement made by the user and gives an analog output for each axis, which is then conditioned and converted using an appropriate ADC. The raw digital (X, Y, Z) coordinates or the processed actions/controls are then transmitted through IR/RF to the central console for further processing.
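Once the three axes are digitized, static tilt can be recovered with simple trigonometry, a common first processing step for motion-based HIDs. A sketch that assumes the raw counts have already been scaled to units of g:

```c
#include <math.h>

#define RAD_TO_DEG (180.0 / 3.14159265358979)

/* Static pitch/roll (degrees) from a 3-axis accelerometer reading in g.
 * Valid only when the device is not otherwise accelerating. */
static void tilt_from_accel(double ax, double ay, double az,
                            double *pitch_deg, double *roll_deg)
{
    *pitch_deg = atan2(-ax, sqrt(ay * ay + az * az)) * RAD_TO_DEG;
    *roll_deg  = atan2(ay, az) * RAD_TO_DEG;
}
```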

Different user interfaces have become popular at different periods of time for different reasons, and the current trend is capacitive touch sensing. What is evident in all of the scenarios discussed above is that a mixed-signal platform is a requirement for any kind of HID. Programmability is also a key requirement, so the platform can quickly be adapted to different types of user interfaces. A programmable mixed-signal system-on-chip platform such as PSoC provides a rich array of analog and digital building blocks, industry-standard processors, and wired/wireless interfaces, giving designers the ability to create precisely the chip required for a specific HID design. Mixed-signal SoCs also remove the barriers posed by fixed-function MCUs and discrete analog and digital components by providing flexibility, integration, and analog functionality in a single device.


Wednesday, July 13, 2011

Electronics and the environment: Five technologies to watch

Today, electronics is also being exploited for the accomplishment of a goal that was not a concern in the industry’s formative years but that has implications for the future of the planet: reducing power consumption. Specifically, the aim is a reduction in society’s reliance on fossil fuels, such as oil and coal, for generating electric power, thereby reducing the CO2 emissions that many scientists have fingered as a prime culprit in climate change.

The relationship between the evolution of electronics and power efficiency follows a trend line that predates and is steeper than Moore’s Law. Today’s average laptop, for example, is massively more energy-efficient than the vacuum-tube computers of the 1950s as measured by computations per kilowatt-hour. Project this trend out a decade, and some believe we’ll have laptops that run on ambient light and never need to be plugged in.

Here, we take a look at five electronics technologies that are playing a high-profile role in this power revolution. The list is not a ranking, nor is it definitive; rather, it is a collection of innovations that together will make a difference for the planet. [EE Times thanks our reader community for responding online to our call for suggested topics. — Ed.]

The transistor: Going 3-D

Imagine for a moment that the solid-state revolution had never occurred and we were still living in a world of vacuum-tube computing. Not only would our laptops be much bigger (think, four-bedroom house), but they would require significantly more electricity to perform the same operation—about a trillion times more.

From the vacuum-tube ENIAC era of the ’40s and ’50s to the present, computations per kilowatt-hour have doubled every 1.6 years, according to Jonathan Koomey, a consulting professor at Stanford University and co-author of a 2009 paper that details the relationship between computers’ energy use and their performance.

According to Koomey's research, ENIAC operated at less than 1 kiloflops (10^3 floating-point operations/second) per kilowatt-hour, while today's laptops can theoretically operate at 1 petaflops (10^15 flops)/kWh.


Computations per kilowatt-hour: as computers pack more computational power, the energy needed to perform a particular calculation decreases rapidly, a trend that predates Moore's Law. Today's laptops are 1 trillion times more energy-efficient than the vacuum-tube computers of the '40s.


Thanks to Intel’s announcement in early May that it was commercializing its three-dimensional “trigate” transistor in a 22-nm microprocessor, the continuation of Moore’s Law and Koomey’s trend line for computations/kWh is assured for at least a few more years. That’s because the novel fin architecture of the trigate transistor consumes less than half the power at the same performance level as a 2-D planar transistor on a 32-nm chip, according to Intel. Smaller, faster, cheaper … and dramatically more power efficient.

Extrapolating the power/performance trend line out a few years, Koomey believes it will have profound implications for the evolution of mobile computing and, in particular, the prospects for harnessing the information-gathering potential of wireless sensor networks.

That’s not all. As the number of data centers continues to rise, the operations they run will become that much more energy-efficient. Today, the world’s data centers account for about 1 percent of total electric energy consumed. Theoretically, all things being equal, a doubling of the number of data centers that use 3-D transistor IC architectures would have a negligible impact on the total energy requirement, while operating at much higher performance levels.


Monday, July 11, 2011

Tilera launches many-core 64-bit processor

The 64-bit processors are designed for cloud computing datacenters and come with 36, 64 or 100 cores that operate at clock frequencies up to 1.5 GHz. The 36-core version consumes 20 W and samples in July, Tilera (San Jose, Calif.) said. The two larger versions consume 35 W and 48 W, respectively, and are due to sample early in 2012. The three chips carry 12, 20 and 32 Mbytes of total on-chip memory, in ascending order of complexity, the company said.

The company did not say how much the chips will cost.

One reason that Tilera can perform so well, the company asserted, is that the processors have been optimized for datacenter applications such as database mining and video transcoding.

In the 3000 series, each core features a three-issue, 64-bit ALU with its own virtual memory system. Each core includes 32 kbytes of level-one instruction cache, 32 kbytes of L1 data cache and 256 kbytes of L2 cache, with up to 32 Mbytes of L3 coherent cache across the device. Processor utilization is optimized using memory striping across up to four 72-bit DDR3 memory controllers supporting up to one terabyte (TB) of total capacity. The 3000 series integrates networking hardware for preprocessing, load balancing, and buffer management of incoming traffic.

The 3000 series chips are designed to handle the most common cloud applications and run Linux release 2.6.36.

"We have been working with the largest cloud computing companies for two years to design a processor that addresses their biggest pain points," said Ihab Bishara, director of server solutions at Tilera, in a statement. "The Tile-Gx 3000 series has features like 64-bit processing, virtualization support and high processor frequency, which were specifically implemented for our web customers. The era of 20 to 30 percent incremental gains is over. The Gx-3000 series provides the order of magnitude improvements the industry is looking for."



Graphical depiction of Tilera's tile architecture, which the company says is power efficient and highly scalable. Source: Tilera Corp.


Thursday, July 7, 2011

Understanding the impact of digitizer noise on oscilloscope measurements

One of the most common sources of errors in measurements is the presence of vertical noise, which can decrease the accuracy of signal measurement and lead to such problems as inaccurate measurements as frequencies change. You can use ENOB (effective-number-of-bits) testing to more accurately evaluate the performance of digitizing systems, including oscilloscopes. The ENOB figure summarizes the noise and frequency response of a system. Resolution typically degrades significantly as frequency increases, so ENOB versus frequency is a useful specification. Unfortunately, when an ENOB specification is provided, it is often at just one or two points rather than across all frequencies.

In test and measurement, noise can make it difficult to make measurements on a signal in the millivolt range, such as in a radar transmission or a heart-rate monitor. Noise can make it challenging to find the true voltage of a signal, and it can increase jitter, making timing measurements less accurate. It also can cause waveforms to appear “fat” in contrast to analog oscilloscopes.

The ENOB concept

Digitizing performance is linked to resolution, but simply selecting a digitizer with the required number of bits, or quantizing level, at the desired amplitude resolution can be misleading because dynamic digitizing performance, depending on the technology, can decrease markedly as signal speeds increase. An 8-bit digitizer can decrease to 6, 4, or even fewer effective bits of performance well before reaching its specified bandwidth.

When designing or selecting an ADC, a digitizing instrument, or a test system, it is important to understand the various factors affecting digitizing performance and to have some means of evaluating overall performance. ENOB testing provides a means of establishing a figure of merit for dynamic digitizing performance. You can use it as an evaluation tool at various design stages and as a way to provide an overall system-performance specification. Because manufacturers don't always specify ENOB for individual instruments or system components, you may need to do an ENOB evaluation for comparison. Essentially, ENOB is a means of specifying the ability of a digitizing device or instrument to represent signals of various frequencies (see Figure 1).



Figure 1: Effective bits decrease as the frequency of the digitized signal increases. In this case, an 8-bit digitizer provides 8 effective bits of accuracy only at dc and low frequencies. As the signal you are digitizing increases in frequency or speed, performance drops to lower and lower values of effective bits.

This decline in digitizer performance manifests itself as an increasing level of noise on the digitized signal. Noise in this case refers to any random or pseudorandom error between the input signal and the digitized output. You can express this noise on a digitized signal in terms of SNR (signal-to-noise ratio):
SNR = rmsSIGNAL / rmsERROR,

where rmsSIGNAL is the root-mean-square value of the digitized signal and rmsERROR is the root-mean-square value of the noise error.

The following equation yields the relationship to effective bits:

EB = log2(SNR) − (1/2)·log2(1.5) − log2(A/FS),

where EB represents the effective bits, A is the peak-to-peak input amplitude of the digitized signal, and FS is the peak-to-peak full-scale range of the digitizer's input.

Other commonly used equations include

EB = N − log2(rmsERROR / IDEAL_QUANTIZATION_ERROR),

where N is the nominal, or static, resolution of the digitizer, and

EB = −log2(rmsERROR·√12 / FS).

These equations employ a noise, or error, level that the digitizing process generates. In the second equation above for EB, the ideal quantization error term is the rms error in the ideal, N-bit digitizing of the input signal. The IEEE Standard for Digitizing Waveform Recorders (IEEE Standard 1057) defines the first two equations (Reference 1). The third equation is an alternative that assumes the ideal quantization error is uniformly distributed over one LSB (least-significant bit) peak to peak. This assumption allows you to replace the ideal quantization error term with FS/(2^N·√12), where FS is the digitizer's full-scale input range.

These equations assume full-scale signals. Actual testing may use test signals at less than full scale (50 or 90% of full scale, for example). Testing at reduced amplitude can yield better ENOB results, so comparisons of ENOB specifications or test results must account for both test-signal amplitude and frequency.
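To make the arithmetic concrete, the following sketch evaluates the first equation for a measured SNR. The ideal-8-bit check uses SNR = 2^8·√1.5, which the formula maps back to exactly 8.0 effective bits; the reduced-SNR case is a hypothetical high-frequency measurement:

```c
#include <math.h>
#include <stdio.h>

/* Effective bits from a measured SNR (first equation above):
 *   EB = log2(SNR) - 0.5*log2(1.5) - log2(A/FS)
 * where A is the test signal's peak-to-peak amplitude and FS is the
 * digitizer's peak-to-peak full-scale range. */
static double effective_bits(double snr, double a_pp, double fs_pp)
{
    return log2(snr) - 0.5 * log2(1.5) - log2(a_pp / fs_pp);
}

int main(void)
{
    /* Ideal 8-bit converter with a full-scale sine: SNR = 2^8 * sqrt(1.5). */
    printf("ideal 8-bit: %.2f EB\n",
           effective_bits(256.0 * sqrt(1.5), 1.0, 1.0));

    /* Hypothetical high-frequency measurement where added noise has cut
     * the measured SNR to 60 on the same full-scale input. */
    printf("noisy case : %.2f EB\n", effective_bits(60.0, 1.0, 1.0));
    return 0;
}
```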

Noise or error relating to digitizing can come from a number of sources. Even in an ideal digitizer, quantizing causes a minimum noise or error level amounting to ±½ LSB. This error is an inherent part of digitizing (Figure 2).



It is the resolution limit, or uncertainty, associated with ideal digitizing. A real-life digitizer adds further errors to this basic ideal error floor. These additional real-life errors can include dc offset; ac offset, or “pattern” errors, sometimes called fixed pattern distortion, associated with interleaved sampling methods; dc and ac gain error; analog nonlinearity; and digital nonmonotonicity. You must also consider phase errors; random noise; frequency-timebase inaccuracy; aperture uncertainty, or sample-time jitter; digital errors, such as data loss due to metastability, missing codes, and the like; and other error sources, such as trigger jitter.


Wednesday, July 6, 2011

MEMS sensors for advanced mobile applications—An overview

MEMS sensors include, among others, accelerometers (ACC), gyroscopes (GYRO), magnetometers (MAG), pressure sensors (PS) and microphones (MIC). These sensors have been integrated in the last few years in portable devices because of their low cost, small size, low power consumption and high performance.

Fast CPUs with multi-tasking OS platforms, highly sensitive GPS receivers, 3G/4G wireless communication chipsets, high-resolution digital video cameras, touch-screen LCD displays and large storage are common in smartphones. The use of MEMS sensors is therefore no longer limited to existing applications such as screen rotation, power saving, motion detection, E-Compass and 3D gaming. More advanced applications for MEMS sensors, such as Augmented Reality (AR), Location-Based Services (LBS) and Pedestrian Dead-Reckoning (PDR), are currently being developed.

This article discusses the role of MEMS sensors in advanced applications in mobile devices including Mobile Augmented Reality (MAR), LBS and the solution of MEMS sensor fusion integrated with a GPS receiver to determine the position and orientation using the dead-reckoning method.

Augmented Reality

Augmented reality (AR) is not a new topic. By definition, AR is a feature or user interface implemented by superimposing graphics, audio and other sensing enhancements over a real-world environment displayed in real time, making it interactive and manipulable. The integration of 3D virtual information into the real-world environment helps provide users with a tangible feeling of the virtual objects that exist around them.

Recently, there have been a few successful applications of AR. For example, a vehicle safety application can provide information on road conditions and surrounding cars by projecting it onto the windshield. Another application displays information about an object, such as a restaurant or supermarket, when the smartphone is pointed at the object with known position and orientation. One can also find the nearest subway station in a big city by sweeping the phone through a full 360 degrees, locating the station and following the directions to the destination.

Social networking plays a key role in people's lives today. When approaching a shopping center, a user can point the phone at it and send friends virtual information augmented with his location and surroundings; vice versa, the user receives information on his friends' whereabouts. AR is thus a new way of experiencing the real world.

The key components available in smartphones for MAR are shown in Figure 1.



Figure 1. Smartphone structure for MAR

* Digital video camera: Used to stream information about the real-world environment and display captured video on the LCD touch screen. Currently 5-Megapixel or higher camera sensors are available in new smartphones.

* CPU, Mobile OS, UI and SDK: These components are the core of a smartphone. A 1-GHz or higher dual-core CPU with 512MB RAM and 32GB storage space can be found in new smartphones. The UI and SDK give developers a simple way to call APIs that access the graphics, wireless communications, database and raw MEMS sensor data without needing to know the details behind them during application development.

* High-sensitivity GPS Receiver, or A-GPS or DGPS: Fixes the user's current location in terms of latitude and longitude when a sufficient number of satellites is captured. A lot of effort has been invested over the years to increase GPS sensitivity and positioning accuracy for indoor and urban canyon areas, where satellite signals are degraded and multipath errors occur.

* Wireless link for data transmission, including GSM/GPRS, Wi-Fi, Bluetooth and RFID: Provides Internet access to retrieve online database entries for objects near the current location, and gives rough positioning information while waiting for a GPS fix or when GPS is not available. Short-range wireless links such as WLAN, Bluetooth and RFID can also be used for indoor positioning with adequate accuracy if the transmitters are pre-installed.

* Local or online Database: Utilized for the virtual object information augmented on the real-world video display. When the object is aligned to the current position and orientation, its information can be retrieved from an online or locally saved database. Users can then click the hyperlink or the icon on the touch screen to receive more detailed information.

* LCD Touch Screen with digital Map: Provides high-resolution UI that displays real-world video augmented by virtual object information. With the digital map, users can know the current location with street names and don’t need to wear special goggles.

* MEMS sensors (ACC, MAG, GYRO and PS): Self-contained components that work anywhere and anytime. Due to their low cost, small size, light weight, low power consumption and high performance, these have become popular for pedestrian dead-reckoning (PDR) applications that obtain indoor and outdoor position and orientation in integration with a GPS receiver. The following sections discuss their key roles in increasing the accuracy of indoor position and orientation.

The main challenge of the MAR is to obtain accurate and reliable position and orientation anytime and anywhere to align the virtual objects with the real world.

Indoor position and orientation detection

Although many smartphones have a built-in GPS receiver that works well for outdoor location and driving directions displayed on a digital map, GPS receivers often cannot get a position fix indoors or in urban canyon areas. Even outdoors, GPS cannot give accurate orientation or heading information when a pedestrian or car is not moving, nor can it distinguish small height changes. Moreover, GPS cannot provide the mobile user's or vehicle's attitude information, such as pitch/roll/heading, with a single antenna.

Differential GPS (DGPS) can achieve position accuracy of a few centimeters, though it needs a second GPS receiver as a base station within a certain range, transmitting the coarse/acquisition code as a position reference to the mobile receivers. Assisted GPS (A-GPS) can help to some extent to get a fix indoors, but sometimes it still cannot provide an accurate position within an acceptable time. With at least three GPS antennas, it is possible to detect attitude information even when the mobile user is not moving; however, multiple GPS antennas are hardly feasible in a smartphone.

As a result, a GPS-only smartphone is not capable of providing accurate position and attitude for a mobile user. Self-contained MEMS sensors are an excellent option to assist the GPS for integrated navigation systems for indoor and outdoor LBS.

Modern GPS receivers have an absolute position accuracy of 3 to 20 meters when the antenna has a clear view of the sky, and this accuracy doesn't drift over time. A strapdown inertial navigation system (SINS) based on MEMS sensors can provide accurate position over a short time, but it quickly drifts, depending on the performance of the motion sensors. PDR is a relative navigation technique that uses step length and orientation to calculate the distance traveled indoors from an initial known position. Its position accuracy doesn't drift over time, but heading accuracy must be maintained in magnetically disturbed environments, and the step length needs to be calibrated by GPS for acceptable location accuracy.
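The PDR position update itself is simple dead-reckoning arithmetic: each detected step advances the position by the calibrated step length along the current heading. A minimal sketch; step detection and heading estimation are assumed to happen elsewhere:

```c
#include <math.h>

#define DEG_TO_RAD (3.14159265358979 / 180.0)

/* Dead-reckoning update: advance the (east, north) position in metres by
 * one detected step of length step_m along heading_deg (0 = north). */
static void pdr_step(double *east_m, double *north_m,
                     double step_m, double heading_deg)
{
    double h = heading_deg * DEG_TO_RAD;
    *east_m  += step_m * sin(h);
    *north_m += step_m * cos(h);
}
```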

Based on SINS theory, inertial sensors (3-axis ACC and 3-axis GYRO) are categorized as navigation-grade, tactical-grade and commercial-grade according to the stability of their inherent biases and scale factors. The horizontal position error from an unaided ACC alone or GYRO alone can be calculated from the following two equations [1].



The above equations can be used to estimate typical inertial sensor performance and the corresponding horizontal position error from the long-term bias stability characteristics. These errors will not grow over time when the sensors are integrated with GPS. Other error sources, such as misalignment, non-linearity and temperature effects, which cause extra position errors, should also be considered in these calculations.
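The equations themselves did not survive reproduction here, but the standard textbook short-term approximations convey the idea: a constant accelerometer bias b integrates into a position error of about (1/2)·b·t², while a constant gyro bias ε tilts the platform so gravity is mis-resolved, giving about (1/6)·g·ε·t³. A sketch using those approximations (an assumption, since the article's own equations are not shown), with hypothetical commercial-grade bias figures:

```c
#include <stdio.h>

#define G 9.81  /* m/s^2 */

/* Horizontal position error after t seconds from a constant, unaided
 * accelerometer bias b (m/s^2): ~ 0.5 * b * t^2. */
static double pos_err_accel_m(double bias_ms2, double t_s)
{
    return 0.5 * bias_ms2 * t_s * t_s;
}

/* Horizontal position error from a constant, unaided gyro bias (rad/s):
 * the tilt error mis-resolves gravity, giving ~ (1/6) * g * eps * t^3. */
static double pos_err_gyro_m(double bias_rads, double t_s)
{
    return G * bias_rads * t_s * t_s * t_s / 6.0;
}

int main(void)
{
    /* Hypothetical commercial-grade figures: 10 mg accelerometer bias and
     * 50 deg/h gyro bias. */
    double b   = 10e-3 * G;
    double eps = (50.0 / 3600.0) * (3.14159265358979 / 180.0);

    for (double t = 10.0; t <= 60.0; t += 25.0)
        printf("t = %2.0f s: accel-only %6.1f m, gyro-only %6.1f m\n",
               t, pos_err_accel_m(b, t), pos_err_gyro_m(eps, t));
    return 0;
}
```

Even these modest biases produce errors of tens to hundreds of metres within a minute, which is why unaided commercial-grade SINS is useful only for bridging short GPS outages.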

Thanks to recent advances in MEMS processes, MEMS ACC and GYRO devices have been continuously improving in performance, approaching the level of tactical-grade devices. Over a short time period, such as 1 minute, an unaided ACC and GYRO can give relatively accurate position measurements. This is useful for forming GPS/SINS integrated navigation systems when the GPS signal is blocked.

Usually, for consumer electronics, an error of five percent of the distance travelled is acceptable for indoor PDR. For example, when the pedestrian walks 100 meters, the error should be within 5 meters. This requires the heading error to be within ±2° to ±5° [2]. For instance, when the heading error is 2°, the position error over 100 meters of traveled distance will be 3.5 meters [= 2 × 100 m × sin(2°/2)].

In addition, a MEMS pressure sensor can measure absolute air pressure with respect to sea level. Therefore, the altitude of a mobile user, from 600 meters below sea level to 9,000 meters above sea level, can be determined to aid the GPS height measurement [2]. Figure 2 shows the PDR block diagram for MEMS sensors and GPS.



Figure 2. PDR block diagram in a mobile device

MEMS sensor fusion

Sensor fusion is a set of digital filtering algorithms that takes each sensor's measurement data as input, compensates for the disadvantages of each individual sensor, and outputs accurate and responsive dynamic attitude information (pitch/roll/heading). With fusion, the heading or orientation is immune to environmental magnetic disturbance as well as to the bias drift of the gyroscope.

A tilt-compensated E-Compass, which consists of a 3-axis ACC and a 3-axis MAG, can provide heading with respect to earth's magnetic north, but this heading is sensitive to environmental magnetic disturbance. With the addition of a 3-axis GYRO, it is possible to develop a 9-axis sensor fusion solution that maintains accurate heading anywhere and anytime.

When designing a system using ACC, GYRO, MAG and PS, it is important to understand the advantages and disadvantages of each MEMS sensor, as summarized in the list below.

* ACC: It can be used for a tilt-compensated digital compass in static or slow motion, for a pedometer step counter, and to detect whether the system is in motion or at rest. However, an ACC cannot differentiate true linear acceleration from earth gravity components when the system is in motion in 3D space, and it is sensitive to shake and vibration.

* GYRO: It can continuously provide rotation matrix from system body coordinates to local earth horizontal coordinates and it can aid the digital compass for heading calculations when the MAG gets disturbed. But the bias drift over time leads to unlimited attitude and position error.

* MAG: It can calculate absolute heading with respect to earth magnetic north and can be used to calibrate the gyroscope sensitivity but it is sensitive to environmental magnetic interference fields.

* PS: It can be used to tell which floor you are on for indoor navigation and aid GPS for altitude calculation and positioning accuracy when GPS signal is degraded but it is sensitive to wind flow and weather conditions.

Given the above considerations, the Kalman filter is today the most common mathematical instrument for fusing the information coming from the different sensors. It weights each sensor's contribution most heavily where that sensor performs best, thus providing more accurate and stable estimates than a system based on any one sensor alone [3].

Currently, the quaternion-based extended Kalman filter (EKF) is a popular scheme for sensor fusion, because a quaternion has only 4 elements compared with the 9 elements of a rotation matrix, and it avoids the singularity (gimbal-lock) issue present in Euler-angle representations [3].
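A full quaternion EKF is beyond the scope of this article, but the fusion idea can be illustrated with its simplest relative, the complementary filter: integrate the fast-but-drifting gyro, and continuously pull the estimate toward the slow-but-absolute magnetometer heading. A one-axis sketch with an arbitrarily chosen blend weight:

```c
/* One-axis complementary filter: the gyro rate (deg/s) is integrated for
 * responsiveness, while the magnetometer heading (deg) slowly corrects the
 * integration drift. ALPHA close to 1 trusts the gyro in the short term.
 * Wrap-around at 0/360 degrees is ignored for brevity. */
#define ALPHA 0.98

static double fuse_heading(double prev_heading_deg,
                           double gyro_rate_dps,
                           double mag_heading_deg,
                           double dt_s)
{
    double gyro_estimate = prev_heading_deg + gyro_rate_dps * dt_s;
    return ALPHA * gyro_estimate + (1.0 - ALPHA) * mag_heading_deg;
}
```

A Kalman filter effectively replaces the fixed ALPHA with a gain computed from each sensor's modelled noise, which is what lets it de-weight the magnetometer automatically during a disturbance.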

Conclusion

The main challenge for advanced mobile applications such as AR is obtaining accurate position and orientation anywhere and anytime, because AR is closely tied to PDR and LBS. Given the limitations of GPS receivers, MEMS sensors are an attractive solution for indoor PDR, since most of these sensors are already available in smartphones.

In order to achieve the allowable five percent indoor PDR position error, MEMS sensor fusion algorithms need to be developed to compensate the disadvantages of each sensor. As the performance of MEMS sensors is continuously improving, the user-independent SINS/GPS integrated navigation system will be common in smart phones in the near future.


Friday, July 1, 2011

Non-volatile solutions to repetitive event and transaction logging

Smart power meters, automotive engine and brake controls, robotic axis controls, solar inverters, valve controls, and a slew of other consumer and industrial applications have one common memory need: a memory that can store the details of an ongoing operation within a small time window, capture readings until that window completes, transfer the captured results onward, and then, once the operation completes successfully, reset and begin capturing the next set of data in the next repeating time window.

This article compares memory technologies and products that are used to perform this function. First, to align our thinking, we’ll describe several repetitive event examples in this memory class.

New energy meters on the side of homes and businesses offer the user the ability to adjust power usage throughout the day and to take advantage of lower kWh billing rates in non-peak hours. To do this, the meter must continuously take power readings, log them into a small serial memory, upload this data every few minutes to a local-area consolidator for long-term tracking, and then reset and begin a new collection of readings. Just a few minutes of lost readings across a neighborhood due to a power issue can cost the power company thousands of dollars. So, non-volatility is crucial.

Another example is a factory floor robotic movement, consisting of a series of small step movements that are logged until the movement is completed. Once completed, the repetitive motion and step logging begins again. If power is disturbed, it is crucial to the operation that the last completed step is remembered so the movement can continue from that position on power restore. Again, non-volatility is critical.

Most electro-mechanical systems such as HVAC control, solar dish tracking for power inverters and the like, create algorithm based data that learns and adjusts operating parameters to achieve maximum efficiencies. Captured data needs to be retained across power glitches or disturbs.

Serial memory choices
The ideal memory for repetitive-event applications will have a range of densities from as small as 16 kbits up to 4 Mbits (lowest cost), a serial SPI interface (cost, size, switching noise), perfect non-volatility (no readings lost), infinite or near-infinite NV endurance (number of Writes or Stores per lifetime), and operation at the highest allowed SPI clock speeds for both Read and Write. The ideal memory will offer both random-access Read/Write and sequential Read/Write, including a rollover Read/Write from the highest address back to zero (simplicity). Block, page, sector, and chip erase requirements should not impact performance, and the device should rely on a high-yielding technology with a good manufacturing history.
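The rollover Read/Write requirement maps naturally onto a circular log: firmware writes fixed-size records at an ever-advancing address and lets the address wrap from the top of the device back to zero. A sketch against a hypothetical SPI NV-memory driver; nv_write() and the sizes are assumptions, not a real part's API:

```c
#include <stdint.h>

/* Hypothetical SPI NV-memory driver call (not a real part's API). */
extern void nv_write(uint32_t addr, const void *data, uint32_t len);

#define MEM_SIZE_BYTES  (512u * 1024u / 8u)  /* e.g. a 512-kbit device */
#define RECORD_SIZE     16u                  /* fixed-size log record  */

/* Append one record; the write address wraps from the highest record slot
 * back to zero, overwriting the oldest data (circular log). */
void log_append(const uint8_t record[RECORD_SIZE])
{
    static uint32_t next_addr = 0;

    nv_write(next_addr, record, RECORD_SIZE);
    next_addr += RECORD_SIZE;
    if (next_addr + RECORD_SIZE > MEM_SIZE_BYTES)
        next_addr = 0;   /* rollover from the highest address back to zero */
}
```

On a real design the write pointer itself must survive power loss; a common approach is to embed a rolling sequence number in each record and rediscover the log head by scanning at boot.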

Over the last decade SPI interfaces have proliferated. Serial SRAM (Microchip, On), DataFlash (Atmel), serial Flash (Micron), and serial EEPROM (Atmel) memory products have seen growing adoption in small memory density applications where they are used to capture calibration and parametric data, user data and identification details, and hold updatable program code.

Separately, a few suppliers have introduced serial memory products specifically targeted at repetitive event applications.

Cypress Semiconductor offers a family of serial nvSRAM (non-volatile SRAM) products; Ramtron created a product line based on FRAM (ferroelectric RAM); and startup Everspin is working to introduce a serial solution based on MRAM (magnetoresistive RAM) technology. These focused technologies offer the best match to the repetitive-event requirements listed above, particularly the at-speed Read/Write and near-infinite NV endurance needs, albeit at a slight value-based unit-cost premium over serial Flash and serial EEPROM solutions. Let's compare the key features of these memories in repetitive-event applications.

* Serial SRAM: meets all speed and density requirements; uses a CMOS process; excellent manufacturing history; lacks non-volatility, and using battery backup to create non-volatility is cost- and area-prohibitive.

* DataFlash: meets speed and density requirements; CMOS with a non-volatile process module; excellent manufacturing history; not fully non-volatile, will lose data in SRAM buffers on a power glitch; endurance is typically only 100k STOREs; chip erase takes multiple seconds, and this erase time increases with density.

* Serial Flash and serial EEPROM: meet speed and density requirements except for long Write times on block or page erase; NV Stores all data; CMOS with a non-volatile process module; excellent manufacturing history; lowest market price; endurance is typically only 100k STOREs; chip erase takes multiple seconds, and erase time increases with density.

* nvSRAM: designed specifically as a repetitive-event memory; meets all speed and density requirements; CMOS with a non-volatile process module; excellent manufacturing history; near-infinite endurance (the NV Store count is consumed only on a power-down; the device operates as a serial SRAM with infinite endurance while powered); fully random access on Read and Write, with all sequential Read/Write capabilities.

* FRAM and MRAM: designed specifically as repetitive-event memories; meet all speed and density requirements; unique process; custom fabrication; near-infinite endurance; fully random access on Read and Write, with all sequential Read/Write capabilities.

In practice, the designer weighs the above criteria but may apply extra weight to specific criteria or design preferences. For example, if an endurance count above 1 million is a hard requirement, only nvSRAM, FRAM, or MRAM can be selected. If the designer wants to further limit the selection to CMOS processes and established suppliers, then nvSRAM rises to the top. If low endurance and long chip-erase times can be tolerated, market price is likely the key criterion, and a serial EEPROM or serial Flash may be the winning solution.
