Tuesday, 8 January 2019

E-textiles

E-textiles, also known as electronic textiles, smart textiles, or smart fabrics, are fabrics that enable digital components (including small computers) and electronics to be embedded in them. Many intelligent clothing, smart clothing, wearable technology, and wearable computing projects involve the use of e-textiles.

Electronic textiles are distinct from wearable computing because the emphasis is placed on the seamless integration of textiles with electronic elements like microcontrollers, sensors, and actuators. Furthermore, e-textiles need not be wearable. For instance, e-textiles are also found in interior design.


The related field of fibertronics explores how electronic and computational functionality can be integrated into textile fibers.



History


The basic materials needed to construct e-textiles, conductive threads and fabrics, have been around for over 1,000 years. In particular, artisans have been wrapping fine metal foils, most often gold and silver, around fabric threads for centuries. At the end of the 19th century, as people developed and grew accustomed to electric appliances, designers and engineers began to combine electricity with clothing and jewelry, developing a series of illuminated and motorized necklaces, hats, brooches, and costumes. For example, in the late 1800s, a person could hire young women adorned in light-studded evening gowns from the Electric Girl Lighting Company to provide cocktail-party entertainment.


In 1968, the Museum of Contemporary Craft in New York City held a groundbreaking exhibition called Body Covering that focused on the relationship between technology and apparel. The show featured astronauts' space suits along with clothing that could inflate and deflate, light up, and heat and cool itself. Particularly noteworthy in this collection was the work of Diana Dew, a designer who created a line of electronic fashion, including electroluminescent party dresses and belts that could sound alarm sirens.


In 1985, an inventor named Harry Wainwright created the first fully animated sweatshirt, consisting of fiber optics, LEDs, and a microprocessor to control individual frames of animation, resulting in a full-color cartoon on the surface of the apparel. In 1995, Wainwright went on to invent the first machine enabling fiber optics to be machined into fabrics, the process needed to manufacture enough for mass markets, and in 1997 he hired a German machine designer, Herbert Selbach, from Selbach Machinery to produce the world's first CNC machine able to automatically implant fiber optics into any flexible material (www.usneedle.com). Having received the first of a dozen patents based on LED/optic displays and machinery in 1989, Wainwright saw the first CNC machines go into production in 1998, beginning with animated coats for Disney Parks. The first ECG bio-physical display jackets employing LED/optic displays were created by Wainwright and David Bychkov, the CEO of Exmovere.



Uses


Health monitoring of vital signs of the wearer such as heart rate, respiration rate, temperature, activity, and posture.


Sports training data acquisition


Monitoring personnel handling hazardous materials


Tracking the position and status of soldiers in action


Monitoring pilot or truck driver fatigue
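Several of the monitoring uses above reduce to checking sensor streams against expected ranges. The following Python sketch illustrates the idea; the signal names, thresholds, and sensor interface are illustrative assumptions, not taken from any particular e-textile system.

```python
# Hypothetical sketch: flagging abnormal vital signs from an e-textile
# sensor stream. The ranges below are illustrative, not clinical values.

NORMAL_RANGES = {
    "heart_rate": (50, 110),      # beats per minute
    "respiration": (10, 25),      # breaths per minute
    "temperature": (35.5, 38.0),  # degrees Celsius
}

def check_vitals(sample):
    """Return a list of (signal, value) pairs that fall outside range."""
    alerts = []
    for signal, (low, high) in NORMAL_RANGES.items():
        value = sample.get(signal)
        if value is not None and not (low <= value <= high):
            alerts.append((signal, value))
    return alerts

sample = {"heart_rate": 128, "respiration": 18, "temperature": 36.8}
print(check_vitals(sample))  # -> [('heart_rate', 128)]
```

In a real garment, the sample dictionary would be filled from the embedded sensors; the decision logic stays the same.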

Flexible electronics

Flexible electronics, also known as flex circuits, is a technology for assembling electronic circuits by mounting electronic devices on flexible plastic substrates such as polyimide, PEEK, or transparent conductive polyester film. Additionally, flex circuits can be screen-printed silver circuits on polyester. Flexible electronic assemblies may be manufactured using the same components used for rigid printed circuit boards, allowing the board to conform to a desired shape or to flex during its use. These flexible printed circuits (FPCs) are made with a photolithographic technology. An alternative way of making flexible foil circuits or flexible flat cables (FFCs) is laminating very thin (0.07 mm) copper strips between two layers of PET. These PET layers, typically 0.05 mm thick, are coated with a thermosetting adhesive that is activated during the lamination process. FPCs and FFCs have several advantages in many applications:

Tightly assembled electronic packages, where electrical connections are required in 3 axes, such as cameras (static application).

Electrical connections where the assembly is required to flex during its normal use, such as folding cell phones (dynamic application).

Electrical connections between sub-assemblies to replace wire harnesses, which are heavier and bulkier, such as in cars, rockets, and satellites.

Electrical connections where board thickness or space constraints are driving factors.


Advantages of FPCs

Ease of manufacturing and assembly

Single-sided circuits are ideal for dynamic or high-flex applications

Stacked FPCs in various configurations


Applications

Flex circuits are often used as connectors in various applications where flexibility, space savings, or production constraints limit the serviceability of rigid circuit boards or hand wiring. A common application of flex circuits is in computer keyboards; most keyboards use flex circuits for the switch matrix.

In LCD fabrication, glass is used as a substrate. If thin flexible plastic or metal foil is used as the substrate instead, the entire system can be flexible, as the film deposited on top of the substrate is usually very thin, on the order of a few micrometers.

Organic light-emitting diodes (OLEDs) are normally used instead of a back-light for flexible displays, making a flexible organic light-emitting diode display.

Most flexible circuits are passive wiring structures used to interconnect electronic components such as integrated circuits, resistors, capacitors, and the like; however, some are used only for making interconnections between other electronic assemblies, either directly or by means of connectors.

In the automotive field, flexible circuits are used in instrument panels, under-hood controls, circuits to be concealed within the headliner of the cabin, and in ABS systems. In computer peripherals, flexible circuits are used on the moving print head of printers, and to connect signals to the moving arm carrying the read/write heads of disk drives. Consumer electronics devices make use of flexible circuits in cameras, personal entertainment devices, calculators, and exercise monitors.

Flexible circuits are found in industrial and medical devices where many interconnections are required in a compact package. Cellular telephones are another widespread example of flexible circuits.


History

Flexible circuit technology has a surprisingly long history. Patents issued at the turn of the 20th century show clear evidence that early researchers were envisioning ways of making flat conductors sandwiched between layers of insulating material to lay out electrical circuits to serve in early telephony switching applications. One of the earliest descriptions of what could be called a flex circuit was unearthed by Dr. Ken Gilleo: an English patent by Albert Hansen in 1903, in which Hansen described a construction consisting of flat metal conductors on paraffin-coated paper. Thomas Edison's lab books from the same period also indicate that he was considering coating patterns of cellulose gum applied to linen paper with graphite powder to create what would clearly have been flexible circuits, though there is no evidence that the idea was reduced to practice.

5G

5G (5th generation mobile networks or 5th generation wireless systems) denotes the next major phase of mobile telecommunications standards beyond the current 4G/IMT-Advanced standards. 5G is also referred to as beyond 2020 mobile communications technologies. 5G does not describe any particular specification in any official document published by any telecommunication standardization body.

Although updated standards that define capabilities beyond those defined in the current 4G standards are under consideration, those new capabilities are still being grouped under the current ITU-T 4G standards.


Background of 5G

A new mobile generation has appeared approximately every 10 years since the first 1G system, Nordic Mobile Telephone, was introduced in 1981. The first 2G system started to roll out in 1991, the first 3G system appeared in 2001, and 4G systems fully compliant with IMT-Advanced were standardized in 2012. The development of the 2G (GSM) and 3G (IMT-2000 and UMTS) standards took about 10 years from the official start of the R&D projects, and development of 4G systems started in 2001 or 2002. Predecessor technologies have appeared on the market a few years before each new mobile generation: for example, the pre-3G system CdmaOne/IS-95 in the US in 1995, and the pre-4G systems Mobile WiMAX in South Korea in 2006 and the first release of LTE in Scandinavia in 2009.

Mobile generations typically refer to non-backward-compatible cellular standards following requirements stated by ITU-R, such as IMT-2000 for 3G and IMT-Advanced for 4G. In parallel with the development of the ITU-R mobile generations, IEEE and other standardization bodies also develop wireless communication technologies, often for higher data rates and higher frequencies but shorter transmission ranges. The first gigabit IEEE standard was IEEE 802.11ac, commercially available since 2013, soon to be followed by the multi-gigabit standard WiGig or IEEE 802.11ad.


History

In 2008, the South Korean IT R&D program of 5G mobile communication systems based on beam-division multiple access and relays with group cooperation was formed.

On 8 October 2012, the UK's University of Surrey secured £35M for a new 5G research centre, jointly funded by the British government's UK Research Partnership Investment Fund (UKRPIF) and a consortium of key international mobile operators and infrastructure providers, including Huawei, Samsung, Telefonica Europe, Fujitsu Laboratories Europe, Rohde & Schwarz, and Aircom International. The centre will offer testing facilities to mobile operators keen to develop a mobile standard that uses less energy and less radio spectrum while delivering speeds faster than current 4G, with aspirations for the new technology to be ready within a decade.

In February 2013, ITU-R Working Party 5D (WP 5D) started two study items:

1. Study on IMT Vision for 2020 and beyond

2. Study on future technology trends for terrestrial IMT systems.

Both studies aim at a better understanding of the future technical aspects of mobile communications, towards the definition of the next generation of mobile systems.

On 12 May 2013, Samsung Electronics stated that it had developed the world's first 5G system.

In July 2013, India and Israel agreed to work jointly on the development of fifth-generation (5G) telecom technologies.

On 1 October 2013, NTT (Nippon Telegraph and Telephone), the company aiming to launch the world's first 5G network in Japan, won the Minister of Internal Affairs and Communications Award at CEATEC for its 5G R&D efforts.

On 6 November 2013, Huawei announced plans to invest a minimum of $600 million into R&D for next-generation 5G networks capable of speeds 100 times faster than modern LTE networks.

On 8 May 2014, NTT DoCoMo started testing 5G mobile networks with Alcatel-Lucent, Ericsson, Fujitsu, NEC, Nokia, and Samsung.

At the end of September 2014, Dresden University inaugurated a 5G laboratory in partnership with Vodafone.

Humanoid robot

A humanoid robot is a robot with its body shape built to resemble that of the human body. A humanoid design might be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots may also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.


Purpose

Humanoid robots are used as a research tool in several scientific areas.

Researchers need to understand the human body's structure and behavior (biomechanics) to build and study humanoid robots. Conversely, the attempt to simulate the human body leads to a better understanding of it.

Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time. It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans. See transhumanism.

Although the initial aim of humanoid research was to build better orthoses and prostheses for human beings, knowledge has been transferred between both disciplines. A few examples are powered leg prostheses for the neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses, and forearm prostheses. Besides research, humanoid robots are being developed to perform human tasks like personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs. Regular jobs like being a receptionist or a worker on an automotive manufacturing line are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is deceptively great.

They are becoming increasingly popular for providing entertainment too. For example, Ursula, a female robot, sings, plays music, dances, and speaks to her audiences at Universal Studios. Several Disney attractions employ animatronics, robots that look, move, and speak much like human beings, in some of their theme park shows. These animatronics look so realistic that it can be hard to tell from a distance whether or not they are actually human. Although they have a realistic look, they have no cognition or physical autonomy. Various humanoid robots and their possible applications in daily life are featured in an independent documentary film called Plug & Pray, which was released in 2010. Humanoid robots, especially those with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without the need to return to Earth once the mission is completed.


Sensors

A sensor is a device that measures some attribute of the world. Being one of the three primitives of robotics (besides planning and control), sensing plays an important role in robotic paradigms.

Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. Here, the second approach is used.

Machine Vision

Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance in industry. The scope of MV is broad. MV is related to, though distinct from, computer vision.


Application

The primary uses for machine vision are automatic inspection and industrial robot guidance. Common machine vision applications include quality assurance, sorting, material handling, robot guidance, and optical gauging.

1. For automatic PCB inspection.

2. For wood quality inspection

3. Final inspection of sub-assemblies

4. Engine part inspection

5. Label inspection of products

6. Checking medical devices for defects

7. Final inspection cells

8. Robot guidance and checking orientation of components

9. Packaging inspection

10. Medical vial inspection

11. Food pack checks

12. Verifying engineered components


Methods

Machine vision methods are defined as both the process of defining and creating an MV solution and as the technical process that occurs during the operation of the solution. Here the latter is addressed. As of 2006, there was little standardization in the interfacing and configurations used in MV. This includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange. Nonetheless, the first step in the MV sequence of operation is the acquisition of an image, typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing. MV software packages then employ various digital image processing techniques to extract the required information and often make decisions (such as pass/fail) based on the extracted information.
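The acquire-process-decide sequence described above can be sketched in a few lines. The toy grayscale "image" and brightness threshold below are illustrative assumptions, standing in for a real camera, lighting, and feature extractor.

```python
# Minimal sketch of the MV operation sequence: acquire an image, extract
# a feature (here, mean brightness), then make a pass/fail decision.

def mean_intensity(image):
    """Average pixel value of a grayscale image given as a list of rows."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def inspect(image, threshold=100):
    """Pass/fail decision: a dark image suggests a missing or bad part.
    The threshold is a hypothetical value tuned per application."""
    return "pass" if mean_intensity(image) >= threshold else "fail"

good_part = [[120, 130], [140, 150]]   # bright region: part present
bad_part  = [[40, 50], [60, 55]]       # too dark: part missing or defective
print(inspect(good_part), inspect(bad_part))  # -> pass fail
```

A production system would replace the mean-brightness feature with techniques such as blob analysis, edge detection, or pattern matching, but the overall acquire/extract/decide shape is the same.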


Imaging

While conventional (2D visible-light) imaging is most commonly used in MV, alternatives include imaging various infrared bands, line-scan imaging, 3D imaging of surfaces, and X-ray imaging. Key divisions within MV 2D visible-light imaging are monochromatic vs. color, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes. The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or image during the imaging process. Other 3D methods used for machine vision are time-of-flight, grid-based, and stereoscopic.

The imaging device (e.g. camera) can either be separate from the main image-processing unit or combined with it, in which case the combination is generally called a smart camera or smart sensor. When separated, the connection may be made to specialized intermediate hardware, a frame grabber, using either a standardized (Camera Link, CoaXPress) or custom interface. MV implementations also use digital cameras capable of direct connection (without a frame grabber) to a computer via FireWire, USB, or Gigabit Ethernet interfaces.

Molecular electronics

Molecular electronics is the study and application of molecular building blocks for the fabrication of electronic components. It is an interdisciplinary area that spans physics, chemistry, and materials science. The unifying feature is the use of molecular building blocks for the fabrication of electronic components. Due to the prospect of size reduction in electronics offered by molecular-level control of properties, molecular electronics has generated much excitement. Molecular electronics provides a potential means to extend Moore's law beyond the foreseen limits of small-scale conventional silicon integrated circuits.


Molecular scale electronics

Molecular scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules constitute the smallest stable structures possible, this miniaturization is the ultimate goal for shrinking electrical circuits.

Conventional electronic devices are traditionally made from bulk materials. The bulk approach has inherent limitations in addition to becoming increasingly demanding and expensive. Thus, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom-up) as opposed to carving them out of bulk material (top-down). In single-molecule electronics, the bulk material is replaced by single molecules. That is, instead of creating structures by removing or applying material after a pattern scaffold, the atoms are put together in a chemistry lab. The molecules utilized have properties that resemble traditional electronic components such as a wire, transistor, or rectifier.

Single molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular-sized compounds are still very far from being realized. However, the continuous demand for more computing power together with the inherent limitations of the present day lithographic methods make the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways of obtaining reliable and reproducible contacts between the molecular components and the bulk material of the electrodes.

Molecular electronics operates in the quantum realm of distances less than 100 nanometers. Miniaturization down to single molecules brings the scale down to a regime where quantum effects are important. As opposed to the case in conventional electronic components, where electrons can be filled in or drawn out more or less like a continuous flow of charge, the transfer of a single electron alters the system significantly. The significant amount of energy due to charging has to be taken into account when making calculations about the electronic properties of the setup and is highly sensitive to distances to conducting surfaces nearby.
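The charging energy mentioned above can be made concrete. For an island modeled as a small conducting sphere (an illustrative assumption), adding one electron costs E = e²/(2C), where the sphere's self-capacitance is C = 4πε₀r:

```python
import math

# Charging energy E = e^2 / (2C) for a single electron added to a small
# conductor, modeled (illustratively) as a sphere of radius r with
# self-capacitance C = 4*pi*eps0*r.
Q_E = 1.602176634e-19     # elementary charge, coulombs
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def charging_energy_eV(radius_m):
    c = 4 * math.pi * EPS0 * radius_m   # self-capacitance, farads
    e_joules = Q_E**2 / (2 * c)         # charging energy, joules
    return e_joules / Q_E               # convert to electronvolts

# At molecular scales (r ~ 1 nm) the charging energy far exceeds thermal
# energy at room temperature (~0.026 eV), so adding even one electron
# alters the system significantly, as the text describes.
print(round(charging_energy_eV(1e-9), 2))  # -> 0.72
```

The sphere model ignores nearby conducting surfaces, which, as noted above, strongly affect the real capacitance; the point of the sketch is only the order of magnitude.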

One of the biggest problems with measuring on single molecules is establishing reproducible electrical contact with only one molecule without short-circuiting the electrodes. Because current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are put into use. These include molecular-sized gaps called break junctions, in which a thin electrode is stretched until it breaks. Another method is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate.

Near-field communication

Near-field (or nearfield) communication (NFC) is a set of standards for smartphones and other mobile devices to establish radio communication with each other by touching them together or bringing them into close proximity, usually no more than a few centimeters.

Current NFC systems use a radio frequency of 13.56 MHz, corresponding to a wavelength of 22.11 m. To efficiently generate a far field, which means to send out radio waves of this wavelength, one typically needs an antenna of a quarter wavelength, in practice a meter or more. If the antenna is just a few centimeters long, it will only set up the so-called near field around itself, with the length, width, and depth of the field roughly the same as the dimensions of the antenna. Very little energy will radiate away; it is essentially a stationary electromagnetic field pulsating at 13.56 MHz. If you bring another similarly small antenna into this field, you will induce an electric potential in it, alternating at the said frequency. By modulating the signal in the active antenna, one can, of course, transmit a signal to the passive, receiving antenna. Present and anticipated applications include contactless transactions, data exchange, and simplified setup of more complex communications such as Wi-Fi. Communication is also possible between an NFC device and an unpowered NFC chip, called a tag.
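The wavelength quoted above follows directly from λ = c/f, as a quick computation confirms:

```python
# The NFC wavelength figure is just the speed of light divided by the
# carrier frequency; the quarter-wave length shows why an efficient
# far-field antenna is impractical at 13.56 MHz.
C = 299_792_458          # speed of light, m/s
F_NFC = 13.56e6          # NFC carrier frequency, Hz

wavelength = C / F_NFC           # ~22.11 m, as stated in the text
quarter_wave = wavelength / 4    # ~5.5 m: far larger than any phone antenna
print(round(wavelength, 2), round(quarter_wave, 2))  # -> 22.11 5.53
```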


Uses

NFC builds upon RFID systems by allowing two-way communication between endpoints, where earlier systems such as contactless smart cards were one-way only. It has been used in devices such as the Google Nexus running Android 4.0 Ice Cream Sandwich, which introduced a feature called Android Beam.


Social networking

NFC can be used in social networking situations, such as sharing contacts, photos, videos or files, and entering multiplayer mobile games.


Commerce

NFC devices can be used in contactless payment systems, similar to those currently used in credit cards and electronic ticket smartcards, and allow mobile payment to replace or supplement these systems.

With the release of Android 4.4, Google introduced new platform support for secure NFC-based transactions through Host Card Emulation (HCE), for payments, loyalty programs, card access, transit passes, and other custom services. With HCE, any app on an Android 4.4 device can emulate an NFC smart card, letting users tap to initiate transactions with an app of their choice. Apps can also use a new Reader Mode so as to act as readers for HCE cards and other NFC-based transactions.

On September 9, 2014, Apple also announced support for NFC-powered transactions as part of their Apple Pay program.

Pill camera

A pill camera is a piece of equipment used for a procedure known as capsule endoscopy. It was developed in the late 20th century and was approved for use by the FDA in 2001.

The camera is about 1 inch long and half an inch in diameter, with rounded edges making it shaped like a drug capsule (although slightly larger). It consists of a camera, flash, plastic capsule, and transmitter (at present, usually Bluetooth). It is small enough to be swallowed.

The pill camera is most often used when a disease of the small intestine is suspected. The upper digestive tract can usually be examined with an endoscope, and for problems with the large intestine, a colonoscopy is preferred. However, neither of those two procedures allows examination of the small intestine. In addition, the pill camera is minimally invasive. However, unlike endoscopy and colonoscopy, a pill camera cannot be used to treat a pathology. The pill camera is used for capsule endoscopy, a way to record images of the digestive tract for use in medicine. The capsule is the size and shape of a pill and contains a tiny camera. After a patient swallows the capsule, it takes pictures of the inside of the gastrointestinal tract. The primary use of capsule endoscopy is to examine areas of the small intestine that cannot be seen by other types of endoscopy such as colonoscopy.


Uses

Capsule endoscopy is used to examine parts of the gastrointestinal tract that cannot be seen with other types of endoscopy. Upper endoscopy, also called EGD, uses a camera attached to a long flexible tube to view the esophagus, the stomach, and the beginning of the first part of the small intestine, called the duodenum. A colonoscope, inserted through the rectum, can view the colon and the distal portion of the small intestine, the terminal ileum. These two types of endoscopy cannot visualize the majority of the middle portion of the gastrointestinal tract, the small intestine. Capsule endoscopy is useful when disease is suspected in the small intestine, and can sometimes diagnose sources of occult bleeding or causes of abdominal pain such as Crohn's disease or peptic ulcers. Capsule endoscopy can be used to diagnose problems in the small intestine, but unlike EGD or colonoscopy, it cannot treat pathology that may be discovered. Capsule endoscopy transfers the captured images wirelessly to an external receiver worn by the patient using one of the appropriate frequency bands. The collected images are then transferred to a computer for diagnosis, review, and display. A transmitted radio-frequency signal can be used to accurately estimate the location of the capsule and to track it in real time inside the body and gastrointestinal tract.

Linear Actuator

A linear actuator is an actuator that creates motion in a straight line, in contrast to the circular motion of a conventional electric motor. Linear actuators are used in machine tools and industrial machinery, in computer peripherals such as disk drives and printers, in valves and dampers, and in many other places where linear motion is required. Hydraulic or pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.


Principles

In the majority of linear actuator designs, the basic principle of operation is that of an inclined plane. The threads of a lead screw act as a continuous ramp that allows a small rotational force to be used over a long distance to accomplish movement of a large load over a short distance.
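For an ideal (frictionless) screw, the work put in per revolution, 2πT, equals the axial force times the lead, which quantifies the inclined-plane principle above. The 4 mm lead and 1 N·m torque below are illustrative values:

```python
import math

# Sketch of the lead-screw principle: an ideal (frictionless) screw
# converts a small torque applied over many turns into a large axial force.

def ideal_screw_force(torque_nm, lead_m):
    """Axial force from torque for a frictionless lead screw:
    work per revolution (2*pi*T) equals force times lead."""
    return 2 * math.pi * torque_nm / lead_m

def travel(lead_m, revolutions):
    """Linear travel is simply the lead times the number of turns."""
    return lead_m * revolutions

print(round(ideal_screw_force(1.0, 0.004)))  # 1 N*m on a 4 mm lead -> 1571 N
print(round(travel(0.004, 250), 6))          # 250 turns -> 1.0 m of travel
```

A real screw loses some of this force to thread friction, so actual designs derate the ideal figure; the small-force-over-long-distance trade-off is unchanged.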


Variations

Many variations on the basic design have been created. Most focus on providing general improvements such as higher mechanical efficiency, speed, or load capacity. There is also a large engineering movement towards actuator miniaturization.

Most electro-mechanical designs incorporate a lead screw and lead nut. Some use a ball screw and ball nut. In either case, the screw may be connected to a motor or manual control knob either directly or through a series of gears. Gears are typically used to allow a smaller (and weaker) motor spinning at a higher rpm to be geared down to provide the torque necessary to spin the screw under a heavier load than the motor would otherwise be capable of driving directly. Effectively, this sacrifices actuator speed in favor of increased actuator thrust. In some applications, the use of a worm gear is common, as this allows a smaller built-in dimension while still allowing a great travel length.


Advantages

1. Cheap. Repeatable. No power source required. Self-contained. Identical behavior extending or retracting.

2. Cheap. Repeatable. Operation can be automated. Self-contained. Identical behavior extending or retracting. DC or stepping motors. Position feedback possible.

3. Simple design. Minimum of moving parts. High speeds possible. Self-contained. Identical behavior extending or retracting.

4. Very small motions possible.

5. Very high forces possible.

6. Strong, light, simple, fast.

7. Very compact. Range of motion greater than the length of the actuator.


Disadvantages

1. Manual operation only. No automation.

2. Many moving parts prone to wear.

3. Low to medium force.

4. Consumes barely any power. Short travel unless amplified mechanically. High speed. High voltages required, typically 24 V or more. Expensive and fragile. Good in compression only, not in tension. Typically used for fuel injectors.

Paper battery

A paper battery is an ultra-thin electric battery engineered to use a spacer formed largely of cellulose (the major constituent of paper). It incorporates nanoscale structures to act as high surface-area electrodes to improve the conduction of electricity.

In addition to being ultra-thin, paper batteries are flexible and environmentally-friendly, allowing integration into a wide range of products. Their functioning is similar to conventional chemical batteries with the important difference that they are non-corrosive and do not require a bulky housing.

A paper battery is a flexible, ultra-thin energy storage and production device formed by combining carbon nanotubes with a conventional sheet of cellulose-based paper. A paper battery acts as both a high-energy battery and a supercapacitor, combining two components that are separate in traditional electronics. This combination allows the battery to provide both long-term, steady power production and bursts of energy. Non-toxic, flexible paper batteries have the potential to power the next generation of electronics, medical devices, and hybrid vehicles, allowing for radically new designs and medical technologies.

Paper batteries may be folded, cut, or otherwise shaped for different applications without any loss of integrity or efficiency. Cutting one in half halves its energy production. Stacking them multiplies power output. Early prototypes of the device were able to produce 2.5 volts of electricity from a sample the size of a postage stamp.
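The scaling behavior described above can be captured in a toy model: stacking cells in series adds their voltages, and cutting a cell scales its capacity with the remaining area. The 2.5 V per-cell figure comes from the text; the capacity number is hypothetical.

```python
# Illustrative model of paper-battery scaling. CELL_VOLTAGE is taken from
# the prototype figure in the text; the capacity value is hypothetical.

CELL_VOLTAGE = 2.5   # volts, per postage-stamp-sized sample

def stack_voltage(n_cells):
    """Stacking cells in series multiplies the output voltage."""
    return n_cells * CELL_VOLTAGE

def cut_capacity(capacity_mah, fraction):
    """Cutting a cell scales its charge capacity with the remaining area."""
    return capacity_mah * fraction

print(stack_voltage(4))        # -> 10.0 (volts from a 4-cell stack)
print(cut_capacity(10.0, 0.5)) # -> 5.0 (half the cell, half the capacity)
```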


Development

The creation of this nanocomposite paper drew from a diverse pool of disciplines, requiring expertise in materials science, energy storage, and chemistry. In August 2007, a research team led by Drs. Robert Linhardt, Pulickel Ajayan, and Omkaram Nalamasu at Rensselaer Polytechnic Institute developed the paper battery. Victor Pushparaj, along with Shaijumon M. Manikoth, Ashavani Kumar, and Saravanababu Murugesan, were co-authors and lead researchers of the project. Other co-authors include Lijie Ci and Robert Vajtai.

This cellulose-based spacer is compatible with many possible electrolytes. Researchers used an ionic liquid, essentially a liquid salt, as the battery electrolyte, as well as naturally occurring electrolytes such as human sweat, blood, and urine. Because an ionic liquid contains no water, there would be nothing in the batteries to freeze or evaporate, potentially allowing operation in extreme temperatures.


Durability

Paper batteries are said to look, feel, and weigh the same as ordinary paper because their components are molecularly attached to each other: the printed carbon nanotubes are embedded in the paper, and the electrolyte is soaked into it.


Uses

The paper-like quality of the battery, combined with the structure of the nanotubes embedded within it, gives it light weight and low cost, making it ideal for portable electronics, aircraft, automobiles, and toys (such as model aircraft). Its ability to use electrolytes in blood makes it potentially useful for medical devices such as pacemakers, medical diagnostic equipment, and transdermal drug-delivery patches. A German healthcare company called KSW Microtech is already using the battery to power the temperature monitoring of blood supplies.

The medical uses are particularly attractive because the batteries do not contain any toxic materials and can be biodegradable, unlike most chemical cells.

Paper battery technology can also be used in supercapacitors.

Smart antenna

Smart antennas (also known as adaptive array antennas, multiple antennas and, recently, MIMO) are antenna arrays with smart signal processing algorithms used to identify spatial signal signatures, such as the direction of arrival (DOA) of the signal, and use them to calculate beamforming vectors that track and locate the antenna beam on the mobile/target. Smart antennas should not be confused with reconfigurable antennas, which have similar capabilities but are single-element antennas rather than antenna arrays.

Smart antenna techniques are used notably in acoustic signal processing, track-and-scan radar, radio astronomy and radio telescopes, and most prominently in cellular systems like W-CDMA and UMTS.

Smart antennas have two main functions: DOA estimation and beamforming.

A smart antenna is a digital wireless communications antenna system that takes advantage of the diversity effect at the source (transmitter), the destination (receiver), or both.

In conventional wireless communications, a single antenna is used at the source and another single antenna at the destination. This is called SISO (single input, single output). Such systems are vulnerable to problems caused by multipath effects. When an electromagnetic (EM) wave encounters obstructions such as hills, canyons, buildings, and utility wires, its wavefronts are scattered and take many paths to reach the destination. The late arrival of scattered portions of the signal causes problems such as fading, cut-out (the cliff effect), and intermittent reception (picket fencing). In a digital communications system, this can reduce data speed and increase the number of errors. The use of smart antennas can reduce or eliminate the trouble caused by multipath wave propagation.

Smart antennas fall into three major categories: SIMO (single input, multiple outputs), MISO (multiple input, single output), and MIMO (multiple input, multiple output). In SIMO technology, one antenna is used at the source, and two or more antennas are used at the destination. In MISO technology, two or more antennas are used at the source, and one antenna is used at the destination. In MIMO technology, multiple antennas are employed at both the source and the destination. MIMO has attracted the most attention recently because it can not only eliminate the adverse effects of multipath propagation but in some cases can turn it into an advantage.
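The DOA-plus-beamforming idea described above can be sketched numerically. Below is a minimal delay-and-sum beamformer for a uniform linear array; the 8-element, half-wavelength-spacing array and the 20-degree look angle are hypothetical choices for illustration, not details from the text:

```python
import math, cmath

def steering_vector(n_elems, d_over_lambda, theta_deg):
    """Phase signature of a plane wave arriving at a uniform linear
    array from angle theta (degrees from broadside)."""
    phi = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * k * phi) for k in range(n_elems)]

def beamformer_gain(weights, theta_deg, d_over_lambda=0.5):
    """Magnitude of the array response for a given set of weights."""
    v = steering_vector(len(weights), d_over_lambda, theta_deg)
    return abs(sum(w.conjugate() * s for w, s in zip(weights, v)))

# Steering the beam: use the look direction's own signature as weights.
w = steering_vector(8, 0.5, 20)          # 8 elements, lambda/2 spacing
print(round(beamformer_gain(w, 20), 1))  # full array gain at 20 deg: 8.0
print(beamformer_gain(w, -40) < 2.0)     # strongly attenuated off-axis: True
```

A switched-beam system would pick `w` from a fixed set of precomputed steering vectors, while an adaptive array recomputes the weights continuously from the estimated DOA.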


Types of smart antennas

Two of the main types of smart antennas are switched-beam and adaptive-array antennas. Switched-beam systems have several fixed beam patterns available, and the system decides which beam to use at any given moment based on its requirements. Adaptive arrays allow the antenna to steer the beam toward any direction of interest while simultaneously nulling interfering signals; the beam direction can be estimated using direction-of-arrival (DOA) estimation methods. In 2008, the United States NTIA began a major effort to assist consumers in the purchase of digital television converter boxes, and through this effort many people were exposed to the concept of smart antennas for the first time. In the context of consumer electronics, a smart antenna is one that conforms to the EIA/CEA-909 Standard Interface.

Spintronics

Spintronics (a portmanteau of spin transport electronics), also known as spin electronics or fluxtronics, is an emerging technology exploiting both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. Spintronics differs from the older magnetoelectronics in that the spins are manipulated not only by magnetic fields but also by electric fields.

In order to develop spintronics technology, it is first necessary to fully explore potential materials and their properties; by obtaining a thorough understanding of spintronic phenomena, we can effectively utilize them to create spin-engineered materials and working devices. Spintronics is an emerging field of nanoscale electronics involving the detection and manipulation of electron spin.

Today microelectronic devices are based on controlling the charge of electrons, either by storing it or sending it flowing as current. However, electrical current is actually composed of two types of electrons, spin-up and spin-down electrons, which form two largely independent spin currents. In the past 15 years, there has been a revolution in our understanding of generating, manipulating and detecting spin-polarized electrical current which makes possible entirely new classes of spin-based sensor, memory and logic devices. This new field of science and technology is now commonly referred to as spintronics.


History

Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. These include the observation of spin-polarized electron injection from a ferromagnetic metal into a normal metal by Johnson and Silsbee (1985) and the discovery of giant magnetoresistance, independently, by Albert Fert et al. and Peter Grünberg et al. The origins of spintronics can be traced back even further, to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and to the initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990.

A particularly important class of spintronic materials are nano-engineered magnetic heterostructures (or multilayers) whose critical element is a sandwich of two ultra-thin magnetic layers separated by atomically thin non-magnetic conducting or insulating layers, forming what are called spin-valve or magnetic tunnel junction devices. Such sandwiches can exhibit giant changes in conductance when the magnetic orientation of the magnetic layers is changed. Spin-valve sensors were pioneered by Stuart Parkin at the Almaden Research Center in 1989-1991 and are today a key component of all magnetic hard-disk drives, enabling their nearly 1,000-fold increase in capacity over the past 8 years. This means that today all the information in the world can be stored in digital form and accessed remotely, effectively from any part of the world; the consequences have been enormous, and one can truly make the case that spintronics has made today's digital world possible.

At Almaden, researchers study a wide range of spintronic materials and devices, both to discover new physical phenomena and for applications in novel sensor, memory, and logic technologies.

Computer aided design

Computer-aided design (CAD) is the use of computer systems to assist in the creation, modification, analysis, or optimization of a design.  CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.  CAD output is often in the form of electronic files for print, machining, or other manufacturing operations.

Computer-aided design is used in many fields. Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design, it is known as mechanical design automation (MDA) or computer-aided drafting (CAD), which includes the process of creating a technical drawing with the use of computer software.

CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.

CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.

CAD is an important industrial art extensively used in many applications, including the automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising, and technical manuals, often called DCC (digital content creation). The modern ubiquity and power of computers mean that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.

The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).


Uses

Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.

CAD is one part of the whole Digital Product Development (DPD) activity within the Product Lifecycle Management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:

1. Computer-aided engineering (CAE) and Finite element analysis (FEA)

2. Computer-aided manufacturing (CAM) including instructions to Computer Numerical Control (CNC) machines

3. Photorealistic rendering

4. Document management and revision control using Product Data Management (PDM).


Technology

Originally, software for computer-aided design systems was developed in languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modelers and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry and/or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
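At bottom, a geometric modeling kernel exposes curve and surface evaluators through its API. As an illustrative sketch (not any particular kernel's interface), here is de Casteljau's algorithm for evaluating a Bezier curve, the simpler non-rational cousin of the NURBS curves mentioned above:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of the control polygon (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Each pass blends adjacent points, shrinking the polygon by one.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A quadratic curve in 2D: endpoints are interpolated exactly.
curve = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(de_casteljau(curve, 0.0))  # (0.0, 0.0)
print(de_casteljau(curve, 0.5))  # (1.0, 1.0)
```

A NURBS evaluator generalizes this by adding per-point weights and a knot vector, but the kernel-level idea, turning a parameter value into a point for the GUI to display, is the same.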

Holography

Holography is a technique which enables three-dimensional images (holograms) to be made. It involves the use of a laser, interference, diffraction, light intensity recording, and suitable illumination of the recording. The image changes as the position and orientation of the viewing system change, in exactly the same way as if the object were still present, making the image appear three-dimensional. The holographic recording itself is not an image; it consists of an apparently random structure of varying intensity, density, or profile.


Overview and history

The Hungarian-British physicist Dennis Gabor (Hungarian: Gábor Dénes) was awarded the Nobel Prize in Physics in 1971 for his invention and development of the holographic method. His work, done in the late 1940s, built on pioneering work in the field of X-ray microscopy by other scientists, including Mieczyslaw Wolfke in 1920 and W. L. Bragg in 1939. The discovery was an unexpected result of research into improving electron microscopes at the British Thomson-Houston (BTH) Company in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography, but optical holography did not really advance until the development of the laser in 1960. The word holography comes from the Greek holos (whole) and graphe (writing or drawing).

The development of the laser enabled the first practical optical holograms that recorded 3D objects to be made in 1962, by Yuri Denisyuk in the Soviet Union and by Emmett Leith and Juris Upatnieks at the University of Michigan, USA. Early holograms used silver halide photographic emulsions as the recording medium. They were not very efficient, as the grating they produced absorbed much of the incident light. Various methods of converting the variation in transmission to a variation in refractive index (known as bleaching) were developed, which enabled much more efficient holograms to be produced.

Several types of holograms can be made. Transmission holograms, such as those produced by Leith and Upatnieks, are viewed by shining laser light through them and looking at the reconstructed image from the side of the hologram opposite the source. A later refinement, the rainbow transmission hologram, allows more convenient illumination by white light rather than by lasers.  Rainbow holograms are commonly used for security and authentication, for example, on credit cards and product packaging.

Another common kind of hologram, the reflection or Denisyuk hologram, can be viewed using a white-light illumination source on the same side of the hologram as the viewer and is the type normally seen in holographic displays. Such holograms are also capable of multicolor image reproduction.


How holography works

Holography is a technique that enables a light field, which is generally the product of a light source scattered off objects, to be recorded and later reconstructed when the original light field is no longer present, due to the absence of the original objects. Holography can be thought of as somewhat similar to sound recording, whereby a sound field created by vibrating matter, such as musical instruments or vocal cords, is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter.
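The record-then-reconstruct cycle can be sketched with complex numbers standing in for wave amplitudes: the hologram stores the interference intensity |O + R|², and re-illuminating it with the reference wave R yields a field containing a term proportional to the original object wave O. This is a toy 1-D model with arbitrary illustrative waves, not a physical simulation:

```python
import math, cmath

def record(obj_wave, ref_wave):
    """Hologram transmittance, proportional to recorded intensity |O + R|^2."""
    return [abs(o + r) ** 2 for o, r in zip(obj_wave, ref_wave)]

def reconstruct(hologram, ref_wave):
    """Re-illumination: transmitted field = transmittance * reference."""
    return [h * r for h, r in zip(hologram, ref_wave)]

n = 8
obj = [0.3 * cmath.exp(1j * 2 * math.pi * x / n) for x in range(n)]  # weak object wave
ref = [cmath.exp(1j * 2 * math.pi * 3 * x / n) for x in range(n)]    # tilted unit reference

out = reconstruct(record(obj, ref), ref)
# Expanding |O + R|^2 * R (with |R| = 1) gives R + |O|^2*R + O + conj(O)*R^2;
# subtracting the three unwanted terms leaves the reconstructed object wave O.
recovered = [t - r - abs(o) ** 2 * r - o.conjugate() * r * r
             for t, o, r in zip(out, obj, ref)]
print(max(abs(rec - o) for rec, o in zip(recovered, obj)) < 1e-9)  # True
```

In a real hologram the unwanted terms are not subtracted; they emerge as separate diffraction orders traveling in different directions, which is why the viewer can see the reconstructed object wave on its own.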

WiTricity

WiTricity is an American engineering company that manufactures devices for wireless energy transfer, using resonant energy transfer based on the synchronized magnetic-flux phase coupling phenomenon (oscillating magnetic fields).


History

The term WiTricity was used for a project that took place at MIT, led by Marin Soljacic, in 2007. The MIT researchers successfully demonstrated the ability to power a 60-watt light bulb wirelessly, using two 5-turn copper coils 60 cm (24 in) in diameter that were 2 m (7 ft) apart, at roughly 45 percent efficiency. The coils were designed to resonate together at 9.9 MHz (wavelength about 30 m) and were oriented along the same axis. One was connected inductively to a power source, and the other to a bulb. The setup powered the bulb even when the direct line of sight was blocked by a wooden panel. Researchers were also able to power a 60-watt light bulb at roughly 90 percent efficiency at a distance of 3 feet. The research project was spun off into a private company, also called WiTricity.

The emerging technology was demonstrated in July 2009 by CEO Eric Giler at the TED Global conference held in Oxford. In this demonstration, Giler showed a WiTricity power unit powering a television as well as three different cell phones, the initial problem that inspired Soljacic to get involved with the project. Automobile manufacturer Toyota made an investment in WiTricity in April 2011.

In September 2012, the company announced it would make a USD 1000 demonstration kit available to interested parties, to promote the development of commercial applications.


Technology

WiTricity is based on strong coupling between electromagnetic resonant objects to transfer energy wirelessly between them. This differs from other methods like simple induction, microwaves, or air ionization. The system consists of transmitters and receivers that contain magnetic loop antennas critically tuned to the same frequency. Because WiTricity devices operate in the electromagnetic near field, receiving devices must be no more than about a quarter wavelength from the transmitter. In the system demonstrated in the 2007 paper, this was only a few meters at the frequency chosen. In their first paper, the group also simulated GHz dielectric resonators. WiTricity devices are coupled almost entirely with magnetic fields (the electric fields are largely confined within capacitors inside the devices), which they argue makes them safer than resonant energy transfer using electric fields (most famously in Tesla coils, whose high electric fields can generate lightning), since most materials couple weakly to magnetic fields.

Unlike far-field wireless power transmission systems based on traveling electromagnetic waves, WiTricity employs near-field resonant inductive coupling through magnetic fields similar to those found in transformers, except that the primary coil and secondary winding are physically separated and tuned to resonate so as to increase their magnetic coupling. These tuned magnetic fields generated by the primary coil can be arranged to interact vigorously with matched secondary windings in distant equipment, but far more weakly with any surrounding objects or materials, such as biological tissue.

In particular, WiTricity is based on using strongly coupled resonances to achieve high power-transmission efficiency. Aristeidis Karalis, referring to the team's experimental demonstration, says that the usual non-resonant magnetic induction would be almost 1 million times less efficient in this particular system.

Saturday, 5 January 2019

Nanotechnology

Nanotechnology is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form nanotechnologies as well as nanoscale technologies to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars. The European Union has invested 1.2 billion and Japan 750 million dollars.

Nanotechnology, as defined by size, is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc. The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials, and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.


Origin

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term nano-technology was first used by Norio Taniguchi in 1974, though it was not widely known.


Current Research

Nanomaterial - The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.

Interface and colloid science have given rise to many materials which may be useful in nanotechnologies, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related also to nanoionics and nanoelectronics.

Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor.

Progress has been made in using these materials for medical applications; see Nanomedicine.

Nanoscale materials such as nanopillars are sometimes used in solar cells to reduce the cost of traditional silicon solar cells.

Applications incorporating semiconductor nanoparticles are being developed for use in the next generation of products, such as display technology, lighting, solar cells, and biological imaging; see quantum dots.


Applications

As of August 21, 2008, the Project on Emerging Nanotechnologies estimates that over 800 manufacturer-identified nanotech products are publicly available, with new ones hitting the market at a pace of 3-4 per week.

Further applications allow tennis balls to last longer, golf balls to fly straighter, and even bowling balls to become more durable and have a harder surface. Trousers and socks have been infused with nanotechnology so that they will last longer and keep people cool in the summer. Bandages are being infused with silver nanoparticles to heal cuts faster.

World first green helicopter

E-volo's Volocopter is a revolution in aviation made in Germany. Safer, simpler, and cleaner than normal helicopters, its unique way of moving is a groundbreaking innovation. The Volocopter is an environmentally friendly and emission-free private helicopter: instead of one combustion engine, eighteen electrically driven rotors propel it.

The maiden flight and first test flights were conducted in the dm-arena in Karlsruhe with the prototype of the two-person VC200 on Sunday, November 17, 2013. Based on this model, it will be prepared for series production in the coming years. "There are already numerous requests for the Volocopter from around the world," said Alexander Zosel, managing director of e-volo.

With multiple flights lasting several minutes, reaching the nearly 22 m high ceiling of the dm-arena and including a number of smooth takeoffs and landings, the Volocopter concept exceeded all expectations. "A rich and incredibly quiet sound, absolutely no noticeable vibrations in flight, a convincing structure with a great new spring-strut landing gear, and an extremely calm rotor plane," concluded the e-volo managing director, thanking the KMK. "New innovations that have the possibility to change our world are continually presented at the Messe Karlsruhe. Therefore it was natural to work in partnership with the e-volo team to enable the test flights in the dm-arena," announced KMK managing director Britta Wirtz. The fair is not just a display of strengths in the technology field, but concretely supports pioneers of aviation as well.

The e-volo development team knew from the outset that the Volocopter was very easy to fly. Thanks to elaborate simulations at the University of Stuttgart, they already knew that it was much quieter than a helicopter. However, the pleasantly low, rich sound and the lower-than-expected noise level caused great cheering among the e-volo team during the first flights.

Windowless planes

Display screens projecting the sky outside could line the cabin of aircraft if windowless planes become reality.

A UK aerospace firm has released images of its windowless plane concept in which display screens show the environment outside the plane as well as films and video conferencing.

Windowless planes could revolutionize air travel as airlines seek to reduce their spending on fuel and new supersonic aircraft are developed.

A French design agency released renderings in August showing its proposed design for a private jet completely devoid of windows in its fuselage.

Technicon Design says that removing windows from aircraft will reduce their weight, thus reducing fuel and maintenance costs and giving designers greater opportunities to enhance and beautify their interiors.

Gareth Davies, the chief designer at Technicon Design, the company behind the concept, said: "Certain elements are already possible... such as the flexible displays."

"The idea is to push the boundaries."

He added that future technology would hopefully allow people to display whatever images they wanted, with the content "only limited by your imagination".

For now, though, the technology remains within reach only of the super-rich, but with the world's first commercial windowless plane already in the pipeline, it may be only a matter of time before the concept is widely adopted.

And virtually any scene could be displayed on the interior. Let's say the view is mostly clouds or ocean: how about displaying a rainforest? A flight through the Grand Canyon? A trip to the Moon?

Solar panels on the exterior would help power the displays.

Removing windows has further advantages, too. It reduces the materials and cost needed, as well as the weight of the plane, and it allows greater flexibility in the interior design of the aircraft.

Electrooculography

Electrooculography (EOG/E.O.G.) is a technique for measuring the corneo-retinal standing potential that exists between the front and the back of the human eye. The resulting signal is called the electrooculogram. Primary applications are in ophthalmological diagnosis and in recording eye movements. Unlike the electroretinogram, the EOG does not measure response to individual visual stimuli.

To measure eye movement, pairs of electrodes are typically placed either above and below the eye or to the left and right of the eye. If the eye moves from the center position toward one of the two electrodes, this electrode sees the positive side of the retina and the opposite electrode sees the negative side of the retina. Consequently, a potential difference occurs between the electrodes. Assuming that the resting potential is constant, the recorded potential is a measure of the eye's position.


Principle

The eye acts as a dipole in which the anterior pole is positive and the posterior pole is negative.

1. Left gaze: the cornea approaches the electrode near the outer canthus of the left eye, resulting in a negative-trending change in the recorded potential difference. 

2. Right gaze: the cornea approaches the electrode near the inner canthus of the left eye, resulting in a positive-trending change in the recorded potential difference.
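The sign convention in the two cases above can be sketched as an idealized linear EOG response. This assumes the recorded potential is roughly proportional to gaze angle within about ±30 degrees; the 16 µV-per-degree sensitivity below is an illustrative value (EOG sensitivities are commonly cited on the order of 5-20 µV per degree):

```python
def eog_signal(gaze_deg, sensitivity_uv_per_deg=16.0):
    """Idealized EOG for the electrode pair described above: the
    recorded potential is roughly proportional to gaze angle.
    Negative angles denote left gaze, positive angles right gaze."""
    return sensitivity_uv_per_deg * gaze_deg

print(eog_signal(-10))  # left gaze  -> negative-trending potential: -160.0 uV
print(eog_signal(+10))  # right gaze -> positive-trending potential: 160.0 uV
```

In practice the resting potential drifts, so EOG recordings are usually interpreted as relative changes rather than absolute gaze positions.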

Electrooculography was used by Robert Zemeckis and Jerome Chen, the visual effects supervisor in the movie Beowulf, to enhance the performance capture by correctly animating the eye movements of the actors. The result was an improvement over the technique used for the film The Polar Express. Electrooculography, in addition to other neuromuscular signals, is used as a computer input method by the Neural Impulse Actuator.

X Gene ARMv8 processors

In 2010, Applied Micro made a strategic decision; courageous or foolhardy, depending upon where you sat in the communications processor world. That decision was to sign an architectural license for ARM's 64-bit v8 processor cores and to embark on the design of a multicore device.

The company's incoming CEO at the time perceived the need to rethink the architecture of server processors in order to achieve a better balance between power and performance, and to make such processors more economically attractive. In itself, the decision to follow the ARM route was not an issue; the interesting factor was that ARM had still to finalize the v8 architecture. It was, to a certain extent, a case of the chicken arriving before the egg. Early involvement allowed Applied Micro to help ARM complete the v8 specification and to write more than 20,000 instruction set verification tests.

Spin forward to 2014 and the fruits of Applied Micro's labors can be seen in the shape of X-Gene, although the development process passed through an interim stage in which an FPGA-based version of the processor was created. The reason for the interest in the v8 architecture, not only from the supply side but also from developers, is the need to reduce the amount of power consumed by server farms and the like. Instead of chasing raw performance, data center developers are now looking to maximize parameters such as performance per watt, as well as getting the most performance for their money.

Gaurav Singh, Applied Micro's VP of engineering and product development, said: "X-Gene is an ARMv8-compatible design. It's a high-performance CPU compared with existing ARM designs, and we are looking for the device to provide 80 to 90% of the performance that operators would expect from a high-end Intel Xeon processor." Singh described X-Gene as a four-issue, out-of-order CPU running at 2.4 GHz, with a range of high-end features including performance optimizations for hypervisors.

Three dimensional Integrated Circuit

In electronics, a three-dimensional integrated circuit (3D IC) is a chip in which two or more layers of active electronic components are integrated both vertically and horizontally into a single circuit. The semiconductor industry is pursuing this technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid.

3D packaging saves space by stacking separate chips in a single package. This packaging, known as System in Package (SiP) or Chip Stack MCM, does not integrate the chips into a single circuit. The chips in the package communicate using off-chip signaling, much as if they were mounted in separate packages on a normal circuit board.

In 2004, Intel presented a 3D version of the Pentium 4 CPU. The chip was manufactured with two dies using face-to-face stacking, which allowed a dense via structure. Backside TSVs are used for I/O and power supply. For the 3D floorplan, designers manually arranged functional blocks in each die, aiming for power reduction and performance improvement. Splitting large, high-power blocks and careful rearrangement made it possible to limit thermal hotspots. The 3D design provides a 15% performance improvement (due to eliminated pipeline stages) and a 15% power saving (due to eliminated repeaters and reduced wiring) compared to the 2D Pentium 4.

The Teraflops Research Chip, introduced in 2007 by Intel, is an experimental 80-core design with stacked memory. Due to the high demand for memory bandwidth, a traditional I/O approach would consume 10 to 25 W. To improve upon that, Intel designers implemented a TSV-based memory bus. Each core is connected to one memory tile in the SRAM die with a link that provides 12 GB/s of bandwidth, resulting in a total bandwidth of nearly 1 TB/s while consuming only 2.2 W.
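Those figures can be sanity-checked with a line of arithmetic:

```python
# Aggregate bandwidth of the Teraflops Research Chip's TSV memory bus:
# 80 cores, each with a 12 GB/s link to its own SRAM tile.
cores = 80
link_gb_s = 12
total = cores * link_gb_s
print(total)              # 960 GB/s, i.e. roughly 1 TB/s aggregate
print(total / 2.2)        # bandwidth per watt at 2.2 W total power
```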

An academic implementation of a 3D processor was presented in 2008 at the University of Rochester by Professor Eby Friedman and his students. The chip runs at 1.4 GHz and was designed for optimized vertical processing between the stacked chips, which gives the 3D processor abilities that a traditional single-layer chip could not reach. One challenge in manufacturing the three-dimensional chip was making all of the layers work in harmony, without any obstacles that would interfere with a piece of information traveling from one layer to another.

At ISSCC 2012, two 3D-IC-based multi-core designs using GlobalFoundries' 130 nm process and Tezzaron's FaStack technology were presented and demonstrated. 3D-MAPS, a 64-core custom implementation with a two-logic-die stack, was demonstrated by researchers from the School of Electrical and Computer Engineering at the Georgia Institute of Technology. The second prototype, Centip3De from the Department of Electrical Engineering and Computer Science at the University of Michigan, was a near-threshold design based on ARM Cortex-M3 cores.


Advantages

Advantages include expanded memory capacity, load reduction, higher frequency, reduced bus turnaround time (improved bus efficiency), lower active-termination power, and lower overall power consumption.