Mind-Reading Computer

Abstract

Drawing inspiration from psychology, computer vision and machine learning, the team in the Computer Laboratory at the University of Cambridge has developed mind-reading machines - computers that implement a computational model of mind-reading to infer mental states of people from their facial signals. The goal is to enhance human-computer interaction through empathic responses, to improve the productivity of the user and to enable applications to initiate interactions with and on behalf of the user, without waiting for explicit input from that user.

Description of Mind-Reading Computer

 

 

Using a digital video camera, the mind-reading computer system analyzes a person's facial expressions in real time and infers that person's underlying mental state, such as whether he or she is agreeing or disagreeing, interested or bored, thinking or confused.
Prior knowledge of how particular mental states are expressed in the face is combined with analysis of facial expressions and head gestures occurring in real time. The model represents these at different granularities, starting with face and head movements and building them up in time and in space to form a clearer model of what mental state is being represented.
Software from NevenVision identifies 24 feature points on the face and tracks them in real time. Movement, shape and colour are then analyzed to identify gestures like a smile or eyebrows being raised. Combinations of these occurring over time indicate mental states; for example, a head nod combined with a smile and raised eyebrows might mean interest. The relationship between observable head and facial displays and the corresponding hidden mental states over time is modeled using Dynamic Bayesian Networks.
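To make the inference step concrete, here is a minimal sketch in Python of a recursive Bayesian update over a single hidden mental-state variable. The states, gestures and probabilities are invented for illustration; the Cambridge system's actual Dynamic Bayesian Networks and the NevenVision tracker are far richer than this.

# Minimal sketch of inferring a hidden mental state from observed head/facial
# gestures with a recursive Bayesian update (a one-variable stand-in for the
# Dynamic Bayesian Networks described above). States, gestures and all
# probabilities are illustrative, not the real model's parameters.

STATES = ["interested", "bored", "confused"]

# P(gesture | state): how likely each display is under each mental state.
LIKELIHOOD = {
    "head_nod":      {"interested": 0.6, "bored": 0.1, "confused": 0.2},
    "smile":         {"interested": 0.5, "bored": 0.2, "confused": 0.1},
    "eyebrow_raise": {"interested": 0.4, "bored": 0.1, "confused": 0.5},
}

def update(belief, gesture):
    """One Bayesian update step: posterior is proportional to likelihood x prior."""
    posterior = {s: LIKELIHOOD[gesture][s] * belief[s] for s in STATES}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}          # uniform prior
for gesture in ["head_nod", "smile", "eyebrow_raise"]:   # observed over time
    belief = update(belief, gesture)

print(max(belief, key=belief.get), belief)  # "interested" dominates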
Why mind reading?
The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands. Imagine a future where we are surrounded with mobile phones, cars and online services that can read our minds and react to our moods. How would that change our use of technology and our lives? We are working with a major car manufacturer to implement this system in cars to detect driver mental states such as drowsiness, distraction and anger.
Current projects in Cambridge are considering further inputs such as body posture and gestures to improve the inference. We can then use the same models to control the animation of cartoon avatars. We are also looking at the use of mind-reading to support on-line shopping and learning systems.
The mind-reading computer system may also be used to monitor and suggest improvements in human-human interaction. The Affective Computing Group at the MIT Media Laboratory is developing an emotional-social intelligence prosthesis that explores new technologies to augment and improve people's social interactions and communication skills.

Web search

For the first test of the sensors, scientists trained the software program to recognize six words - including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors silently said the words to themselves and the software correctly picked up the signals 92 per cent of the time.

Then researchers put the letters of the alphabet into a matrix with each column and row labeled with a single-digit number. In that way, each letter was represented by a unique pair of number co-ordinates. These were used to silently spell "NASA" into a web search engine using the program.
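The coordinate scheme is easy to reproduce. The sketch below lays the alphabet out in a hypothetical 5 x 6 grid (the article does not give the exact layout) and shows how a word such as "NASA" becomes a sequence of digit pairs.

# Sketch of the coordinate scheme described above: the alphabet laid out in
# a grid whose rows and columns are labelled with single digits, so each
# letter becomes a pair of numbers that can be signalled silently.

import string

ROWS, COLS = 5, 6  # a 5x6 grid holds 26 letters (last 4 cells unused)
letters = string.ascii_uppercase
assert ROWS * COLS >= len(letters)

coord = {}  # letter -> (row_digit, col_digit)
for i, ch in enumerate(letters):
    coord[ch] = (i // COLS + 1, i % COLS + 1)

def spell(word):
    return [coord[ch] for ch in word.upper()]

print(spell("NASA"))  # [(3, 2), (1, 1), (4, 1), (1, 1)]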

SCADA

Abstract

SCADA stands for Supervisory Control And Data Acquisition. SCADA refers to a system that collects data from various sensors at a factory, plant or in other remote locations and then sends this data to a central computer which then manages and controls the data. SCADA focuses on gathering and circulating the right amount of system information to the right person or computer within the right amount of time so that creative solutions are made possible.

Description of SCADA

 

 

The keyword "supervisory" indicates that decisions are not made directly by the system. Instead, the system executes control decisions based on control parameters entered by the operations staff. The system monitors the health of the process, generates alarm notifications when conditions go out of tolerance, and is also tasked with placing the process in a safe mode; it then waits for user input to correct problems. The supervisory mode is designed to operate the system in a manner that avoids out-of-tolerance conditions. In a water/wastewater process, for example, pumps are started and stopped by the system according to limits assigned by operations. As long as the system responds correctly to the control commands, it remains in control. SCADA systems supervise three broad classes of processes:
• Industrial processes include those of manufacturing, production, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
• Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, civil defense siren systems, and large communication systems.
• Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control energy consumption.
SCADA can be used to monitor and control plant or equipment. The control may be automatic or initiated by operator commands. Data acquisition is accomplished firstly by the RTUs (Remote Terminal Units) scanning the field inputs connected to them (an RTU may also be a PLC, a programmable logic controller). This is usually done at a fast rate. The central host then scans the RTUs, usually at a slower rate. The data is processed to detect alarm conditions, and if an alarm is present, it is displayed on special alarm lists. Data can be of three main types: analogue data (i.e. real numbers) is trended (i.e. placed in graphs); digital data (on/off) may have alarms attached to one state or the other; and pulse data (e.g. counting the revolutions of a meter) is normally accumulated or counted.
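The scan-and-alarm cycle described above can be illustrated with a short sketch. The point names, limits and alarm rules below are invented; a real central host would poll the RTUs over a protocol such as Modbus or DNP3.

# Minimal sketch of a SCADA host scan cycle over the three data types named
# above: analogue values are trended, digital points can raise alarms on a
# given state, and pulse inputs are accumulated.

trend_log = []      # analogue history for graphing
pulse_totals = {}   # accumulated pulse counts
alarms = []

POINTS = {
    "tank_level": {"type": "analogue", "low": 1.0, "high": 4.5},  # metres
    "pump_fault": {"type": "digital", "alarm_state": True},
    "flow_meter": {"type": "pulse"},
}

def scan(readings):
    """Process one scan of field inputs gathered from the RTUs."""
    for name, value in readings.items():
        cfg = POINTS[name]
        if cfg["type"] == "analogue":
            trend_log.append((name, value))
            if not cfg["low"] <= value <= cfg["high"]:
                alarms.append(f"ALARM: {name} out of tolerance at {value}")
        elif cfg["type"] == "digital":
            if value == cfg["alarm_state"]:
                alarms.append(f"ALARM: {name} active")
        elif cfg["type"] == "pulse":
            pulse_totals[name] = pulse_totals.get(name, 0) + value

scan({"tank_level": 5.1, "pump_fault": False, "flow_meter": 12})
print(alarms)        # ['ALARM: tank_level out of tolerance at 5.1']
print(pulse_totals)  # {'flow_meter': 12}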
These systems are used not only in industrial processes (for example manufacturing, steel making, conventional and nuclear power generation and its distribution, and chemical plants) but also in experimental facilities such as research laboratories, testing and evaluation centres, and nuclear fusion experiments. The size of such plants ranges from a few tens to several tens of thousands of input/output (I/O) channels. However, SCADA systems evolve rapidly and are now penetrating the market of plants with several hundred thousand I/O channels.

FRAM

Abstract

From the early 1950s until the arrival of semiconductor memories, ferromagnetic cores were the dominant type of random-access, nonvolatile memory. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EEPROMs, and FRAMs). Ferromagnetic cores, however, were too bulky and expensive compared to the smaller, low-power semiconductor memories. Ferroelectric memories are a good substitute for ferromagnetic cores.

Description of FRAM

The term "ferroelectric' indicates the similarity, despite the lack of iron in the materials themselves.Ferroelectric memory exhibit short programming time, low power consumption and nonvolatile memory, making highly suitable for application like contact less smart card, digital cameras which demanding many memory write operations.
In other words, FRAM has features of both RAM and ROM. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) process with added layers on top for the ferroelectric capacitors. A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one or two transistors that provide access to the capacitor or amplify its content for a read operation. A ferroelectric capacitor differs from a regular capacitor in that the dielectric is replaced with a ferroelectric material (lead zirconate titanate, PZT, is commonly used). When an electric field is applied, the charges displace from their original positions; this spontaneous polarization is evident in the crystal structure of the material. Importantly, the displacement does not disappear in the absence of the electric field, and the direction of polarization can be reversed or reoriented by applying an appropriate electric field.

FERROELECTRIC CAPACITOR

The basic building block of FRAM is the ferroelectric capacitor. A ferroelectric capacitor is physically distinguished from a regular capacitor by substituting the dielectric with a ferroelectric material. In a regular dielectric, upon the application of an electric field, positive and negative charges are displaced from their original positions, a phenomenon characterized as polarization. This polarization, or displacement, vanishes, however, when the electric field returns to zero. In a ferroelectric material, on the other hand, there is a spontaneous polarization displacement that is inherent to the crystal structure of the material and does not disappear in the absence of an electric field. The direction of this polarization can be reversed or reoriented by applying an appropriate electric field.
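The read mechanism this implies can be illustrated with a toy model: pulsing the capacitor one way either flips the remanent polarization (a large switching charge) or leaves it unchanged (a small charge), and a flipped '1' must be written back afterwards. All values below are arbitrary illustration, not device physics.

# Toy model of the destructive read of a 1T-1C ferroelectric cell implied by
# the description above. Charge values are arbitrary units.

class FerroCell:
    def __init__(self, bit):
        self.polarization = +1 if bit else -1  # remanent, survives power-off

    def read(self):
        # Drive the plateline: force polarization to -1 and sense the charge.
        flipped = self.polarization == +1
        charge = 2.0 if flipped else 0.2  # switching vs non-switching charge
        self.polarization = -1
        bit = charge > 1.0                # sense-amplifier threshold
        if bit:
            self.write(True)              # restore the destroyed '1'
        return bit

    def write(self, bit):
        self.polarization = +1 if bit else -1

cell = FerroCell(True)
print(cell.read(), cell.read())  # True True: data is restored after each read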

BITLINE-PARALLEL PLATELINE
The figure shows an array architecture in which the PL runs parallel to the BL, hence the name BL//PL for the architecture. Unlike the previous architecture, only a single memory cell is selected by the simultaneous activation of a WL and a PL: the memory cell located at the intersection of the WL and the PL. It is possible to select more than one memory cell in a row by activating their corresponding platelines.

 

Itanium Processor

Abstract

The Itanium brand extends Intel's reach into the highest level of computing, enabling powerful servers and high-performance workstations to address the increasing demands that the Internet economy places on e-business.

Description of Itanium Processor

The Itanium architecture is a unique combination of innovative features, such as explicit parallelism, predication, speculation and much more. In addition to providing much more memory than today's 32-bit designs, the 64-bit architecture changes the way the processor hardware interacts with code. The Itanium is geared toward increasingly power-hungry applications like e-commerce security, computer-aided design and scientific modeling.

Intel said the Itanium provides a 12-fold performance improvement over today's 32-bit designs. Its Explicitly Parallel Instruction Computing (EPIC) technology enables it to handle parallel processing differently than previous architectures, most of which were designed 10 to 20 years ago. The technology reduces hardware complexity to better enable processor speed upgrades. Itanium processors contain "massive chip execution resources" that allow "breakthrough capabilities in processing terabytes of data". Itanium is the first processor to use the EPIC architecture, and its performance is expected to be better than that of present-day Reduced Instruction Set Computing and Complex Instruction Set Computing (RISC and CISC) designs.

In modern processors, including the Itanium, a multiplicity of arithmetic-logic or floating-point on-chip units execute several instructions in parallel. Ideally, increasing the number of execution units should increase the number of instructions executed per clock cycle proportionally. But conventional processors also need a lot of extra on-chip circuitry to schedule instructions and track their progress, which takes up valuable space, consumes power and adds steps to the execution process. As a result, only a slight improvement in instructions per clock cycle occurs when the number of execution units is increased. EPIC architects instead use the compiler to schedule instructions.
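The compiler-side scheduling idea can be sketched briefly: greedily pack mutually independent instructions into three-slot bundles, mirroring Itanium's 128-bit, three-instruction bundles. The instruction list and dependency model below are invented for illustration; a real EPIC compiler does far more analysis.

# Sketch of the scheduling EPIC moves off the chip: pack instructions into
# bundles of 3 so that no instruction shares a bundle with a producer of
# one of its source operands.

# (dest, sources) for each instruction, in program order
program = [
    ("a", []), ("b", []), ("c", ["a"]),
    ("d", ["a", "b"]), ("e", ["c", "d"]), ("f", []),
]

bundles, ready_defs = [], set()
pending = list(program)
while pending:
    bundle, defs_this_bundle = [], set()
    for instr in list(pending):
        dest, srcs = instr
        # issue only if all sources were produced in *earlier* bundles
        if len(bundle) < 3 and all(s in ready_defs for s in srcs):
            bundle.append(dest)
            defs_this_bundle.add(dest)
            pending.remove(instr)
    ready_defs |= defs_this_bundle
    bundles.append(bundle)

print(bundles)  # [['a', 'b', 'f'], ['c', 'd'], ['e']]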

MANET

Abstract

In recent years, communication technology and services have advanced. Mobility has become very important, as people want to communicate anytime from and to anywhere. In areas where little or no infrastructure is available, or where the existing wireless infrastructure is expensive and inconvenient to use, Mobile Ad hoc Networks, called MANETs, are becoming useful. They are going to become an integral part of next-generation mobile services.

Description of MANET

 

A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. The special features of MANET bring this technology great opportunities together with severe challenges. Military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt them for commercial uses due to their unique properties; however, a number of problems remain.
During the last decade, advances in both hardware and software techniques have made mobile hosts and wireless networking commonplace and inexpensive. Generally there are two distinct approaches for enabling wireless mobile units to communicate with each other.

Infrastructureless Approach

In the infrastructureless approach, the mobile wireless network is commonly known as a mobile ad hoc network (MANET) [1, 2]. A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. It has many important applications because, in many contexts, information exchange between mobile units cannot rely on any fixed network infrastructure, but only on the rapid configuration of wireless connections on the fly. Wireless ad hoc networks are themselves an independent, wide area of research and applications, rather than merely a complement to the cellular system.
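The sketch below illustrates the basic on-demand route discovery used in such networks: a source floods a route request through whichever neighbours are currently in radio range until it reaches the destination. The topology is invented, and real protocols such as AODV add sequence numbers, timeouts and route maintenance on top of this idea.

# Minimal sketch of flooding-based route discovery in a MANET.

from collections import deque

# current radio neighbourhoods (these change as the nodes move)
neighbours = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}

def discover_route(src, dst):
    """Breadth-first flood of a route request; returns the path found."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbours[path[-1]]:
            if nxt not in visited:       # each node rebroadcasts only once
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(discover_route("A", "E"))  # ['A', 'B', 'D', 'E']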

In this paper, we describe the fundamental problems of ad hoc networking by giving its related research background, including the concept, features, status, and applications of MANET. Some of the technical challenges MANET poses are also presented, based on which the paper points out the related kernel barriers. Some of the key research issues for ad hoc networking technology that are expected to promote the development and accelerate the commercial application of MANET technology are discussed in detail.

MANET Applications

With the proliferation of portable devices and progress in wireless communication, ad hoc networking is gaining importance through a growing number of widespread applications. Ad hoc networking can be applied anywhere there is little or no communication infrastructure, or where the existing infrastructure is expensive or inconvenient to use. It allows devices to maintain connections to the network, and to be added to and removed from the network easily. The set of applications for MANETs is diverse, ranging from large-scale, mobile, highly dynamic networks to small, static networks that are constrained by power sources.

Palm Vein Technology

Abstract

An individual first rests his wrist, and on some devices the middle of his fingers, on the sensor's supports, such that the palm is held centimeters above the device's scanner, which flashes a near-infrared ray on the palm. Unlike the skin, through which near-infrared light passes, the deoxygenated hemoglobin in the blood flowing through the veins absorbs near-infrared rays, causing the veins to appear as a dark pattern visible to the scanner.

Description of Palm Vein Technology

 

Arteries and capillaries, whose blood contains oxygenated hemoglobin, which does not absorb near-infrared light, are invisible to the sensor. The still image captured by the camera, which photographs in the near-infrared range, appears as a black network, reflecting the palm's vein pattern against the lighter background of the palm.
An individual's palm vein image is converted by algorithms into data points, which are then compressed, encrypted, and stored by the software and registered along with the other details in his profile as a reference for future comparison. Then, each time a person attempts to gain access by a palm scan to a particular bank account or secured entryway, the newly captured image is likewise processed and compared to the registered one, or to the bank of stored files, for verification, all in a matter of seconds. The numbers and positions of veins and their crossing points are all compared and, depending on verification, the person is either granted or denied access.
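The comparison step can be sketched as follows, with a registered template and a fresh capture each reduced to a set of feature-point coordinates. The coordinates, tolerance and acceptance threshold are invented; the vendor's actual matching algorithm is proprietary.

# Sketch of matching a stored vein template against a new capture by
# counting feature points that align within a small tolerance.

def match_score(registered, captured, tol=2.0):
    """Fraction of registered points with a captured point within `tol`."""
    hits = 0
    for (x1, y1) in registered:
        if any(abs(x1 - x2) <= tol and abs(y1 - y2) <= tol
               for (x2, y2) in captured):
            hits += 1
    return hits / len(registered)

stored_template = [(10, 12), (34, 50), (57, 21), (73, 66)]
new_scan        = [(11, 12), (33, 51), (58, 20), (90, 90)]

score = match_score(stored_template, new_scan)
print("access granted" if score >= 0.75 else "access denied", score)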
Contactless Palm Vein Authentication Device:
The completely contactless nature of this device makes it suitable for use where high levels of hygiene are required. It also eliminates any hesitation people might have about coming into contact with something that other people have already touched.
In addition to being contactless, and thereby hygienic and user-friendly since the user does not need to physically touch a surface, palm vein authentication is highly secure: the veins are internal to the body and carry a wealth of information, making them extremely difficult to forge.
What happens if the registered palm gets damaged?
There is a chance that the registered palm may get damaged, in which case it could no longer be used. For this reason, the veins of both hands are captured at the time of registration, so that if one palm is damaged, access is still possible through the other hand. Even when a hand is damaged to a large extent, veins can usually still be captured, because they lie deep within the hand. With this method, complete privacy can also be maintained.

Virtual Private Network

Abstract

VPNs have emerged as the key technology for achieving security over the Internet. While a VPN is an inherently simple concept, early VPN solutions were geared towards large organizations and their implementation required extensive technical expertise. As a consequence, small and medium-sized businesses were left out of the e-revolution. Recently, VPN solutions have become available that focus specifically on the needs of small and medium-sized businesses.

Description of Virtual Private Network

Historically, the term VPN has also been used in contexts other than the Internet, such as in the public telephone network and in the Frame Relay network. In the early days of Internet-based VPNs, they were sometimes described as Internet-VPNs or IP-VPNs. However, that usage is archaic and VPNs are now synonymous with Internet-VPNs. A firewall is an important security feature for Internet users: it prevents unauthorized users from moving data into or out of an enterprise. However, when packets pass through the firewall to the Internet, sensitive data such as user names, passwords, account numbers, financial and personal medical information, and server addresses is visible to hackers and potential e-criminals. Firewalls do not protect against threats within the Internet.
This is where a VPN comes into play. A VPN, at its core, is a fairly simple concept: the ability to use the shared, public Internet in a secure manner, as if it were a private network. Consider the flow of data between two users over the Internet when not using a VPN: packets between a pair of users may go over networks run by many ISPs and may take different paths. The structure of the Internet and the different paths taken by packets are transparent to the two users. With a VPN, users encrypt their data and their identities to prevent unauthorized people or computers from looking at the data or tampering with it. A VPN can be used for just about any intranet and e-business (extranet) application. Examples on the following pages illustrate the use and benefits of VPN for mobile users and for remote access to enterprise resources, for communications between remote offices and headquarters, and for extranet/e-business.
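The core idea, encrypting each packet's payload so that intermediate networks see only ciphertext, can be sketched with symmetric encryption. This sketch uses the third-party cryptography package (pip install cryptography) for brevity; real VPN protocols such as IPsec or TLS add key exchange, authentication and tunnelling headers on top.

# Sketch of the VPN idea above: both endpoints share a key and encrypt
# every payload before it crosses the public Internet.

from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # in practice, negotiated via key exchange

def send_over_vpn(payload: bytes) -> bytes:
    """What leaves the sender's machine: ciphertext, opaque to onlookers."""
    return Fernet(shared_key).encrypt(payload)

def receive_over_vpn(packet: bytes) -> bytes:
    """Only the holder of the shared key can recover the payload."""
    return Fernet(shared_key).decrypt(packet)

wire_packet = send_over_vpn(b"account=12345; balance query")
print(wire_packet[:20], b"...")       # unreadable in transit
print(receive_over_vpn(wire_packet))  # original data at the far end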

 

Freenet

Abstract

Networked computer systems are rapidly growing in importance as the medium of choice for the storage and exchange of information. However, current systems afford little privacy to their users, and typically store any given data item in only one or a few fixed places, creating a central point of failure. Because of a continued desire among individuals to protect the privacy of their authorship or readership of various types of sensitive information, and the undesirability of central points of failure which can be attacked by opponents wishing to remove data from the system or simply overloaded by too much interest, systems offering greater security and reliability are needed. Freenet is being developed as a distributed information storage and retrieval system designed to address these concerns of privacy and availability.

Description of Freenet

Freenet is implemented as an adaptive peer-to-peer network of nodes that query one another to store and retrieve data files, which are named by location-independent keys. Each node maintains its own local datastore which it makes available to the network for reading and writing, as well as a dynamic routing table containing addresses of other nodes and the keys that they are thought to hold. It is intended that most users of the system will run nodes, both to provide security guarantees against inadvertently using a hostile foreign node and to increase the storage capacity available to the network as a whole. The system can be regarded as a cooperative distributed filesystem incorporating location independence and transparent lazy replication.

Freenet enables users to share unused disk space while being directly useful to users themselves, acting as an extension to their own hard drives. The basic model is that requests for keys are passed along from node to node through a chain of proxy requests, in which each node makes a local decision about where to send the request next, in the style of IP (Internet Protocol) routing.
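That request chain can be sketched as follows, with integer stand-ins for Freenet's hashed keys. Each node answers from its own datastore if it can, and otherwise forwards the request to the unvisited neighbour whose known keys lie closest to the target; real Freenet routing adds backtracking, caching and richer hops-to-live handling that this omits.

# Sketch of key-based request forwarding through a chain of nodes.

nodes = {
    "n1": {"store": {12: "fileA"}, "peers": ["n2", "n3"]},
    "n2": {"store": {40: "fileB"}, "peers": ["n1", "n4"]},
    "n3": {"store": {77: "fileC"}, "peers": ["n1"]},
    "n4": {"store": {95: "fileD"}, "peers": ["n2"]},
}

def closeness(node, key):
    # distance from `key` to the nearest key this node is known to hold
    return min(abs(key - k) for k in nodes[node]["store"])

def request(node, key, hops_to_live=5, visited=None):
    visited = (visited or set()) | {node}
    if key in nodes[node]["store"]:
        return nodes[node]["store"][key]   # found locally
    candidates = [p for p in nodes[node]["peers"] if p not in visited]
    if hops_to_live == 0 or not candidates:
        return None                        # dead end in this simple sketch
    best = min(candidates, key=lambda p: closeness(p, key))
    return request(best, key, hops_to_live - 1, visited)

print(request("n3", 95))  # 'fileD', reached via n3 -> n1 -> n2 -> n4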
We have no doubt that Freenet will be used in that way, just as the World Wide Web is currently being used in that way. While we may not like these things, we are still creating a system that permits them, because these losses arrive along with a greater good. That greater good is the destruction of censorship. Before the Internet, censorship was simply accepted. Indeed, in some ways it was a requirement: a book publisher could only agree to publish so many books, so they were more likely to go with a book expressing popular views than one that wasn't. With the arrival of the Internet, that is no longer necessary.

But many are comfortable simply extending the pre-Internet-era intellectual property system onto the Internet, in spite of the shift the world's information environment has undergone over the past century. The network judges information based on popularity: if humanity is very interested in pornography, then pornography will be a big part of Freenet. Eventually it comes down to what you think is more important, free speech or catching immoral activities. The Freenet developers say free speech is more important. This quote from Benjamin Franklin might be in order:
"Those who would give up liberty in the interest of security deserve, and will have, neither."

 

Google Glass

Abstract

The emergence of Google Glass, a prototype for a transparent Heads-Up Display (HUD) worn over one eye, is significant. It is the first conceptualization of a mainstream augmented reality wearable eye display by a large company. This paper argues that Glass's birth is not only a marketing phenomenon heralding a technical prototype; it also speculates that Glass's popularization is an instigator for the adoption of a new paradigm in human-computer interaction, the wearable eye display. Google Glass is deliberately framed in the media as the brainchild of Google co-founder Sergey Brin. Glass's process of adoption operates in the context of mainstream and popular culture discourses, such as the Batman myth, a phenomenon that warrants attention.
Project Glass is a research and development program by Google to develop an augmented reality Head-Mounted Display (HMD). The intended purpose of Project Glass products is the hands-free display of information currently available to most smartphone users, allowing interaction with the Internet via natural-language voice commands. These glasses combine features of virtual reality and augmented reality. Google glasses are basically wearable computers that use the same Android software that powers Android smartphones and tablets. Google Glass is as futuristic a gadget as we've seen in recent times, and a useful technology for all kinds of people, including the handicapped/disabled.

Description of Google Glass

Google Glass is a prototype for an augmented reality, heads-up display developed by the Google X lab and slated to run on the Android operating system (see Figure 1). Augmented reality involves technology that augments the real world with a virtual component. The first appearance of Glass was on Sergey Brin, who wore it to an April 5, 2012 public event in San Francisco. Provocative headlines emerged such as “Google ‘Project Glass’ Replaces the Smartphone with Glasses” and “Google X Labs: First Project Glass, next space elevators?”. A groundswell of anticipation surrounds Glass because it implies a revolutionary transition to a new platform, even though release for developers is only planned for 2013. At the time of writing, it is not available to consumers, who can only see it in promotional materials.

Heads-up eye displays are not new. The Land Warrior system, developed by the U.S. army over the past decade, for example, includes a heads-up eye display with an augmented reality visual overlay for soldier communication. Many well-known inventors have contributed eye display technology, research or applications over the past two decades including Steve Mann (Visual Memory Prosthetic), Thad Starner (Remembrance Agent), and Rob Spence (Eyeborg). Commercially, Vuzix is a company that currently manufactures transparent eye displays.
Science fiction and popular references to the eye display are almost too numerous to list, but most are featured in military uses: Arnold Schwarzenegger's Terminator from the 1984 film had an integrated heads-up display that identified possible targets, Tom Cruise's Maverick in Top Gun had a rudimentary display to indicate an enemy plane's target acquisition and current G-forces, and Bungie's landmark video game series Halo features a heads-up display that gives the player real-time updates on enemy locations, shield levels, remaining ammunition and waypoint information. In most popular culture uses, a heads-up display is transparently overlaid upon the real world. However, in video games, the display is considered part of the entire game interface. While many film and television shows are adding HUDs to their storytelling to lend a science fiction or futuristic feel, there is a movement in game development away from artificial HUDs, as many consider them to be "screen clutter" that blocks a player's view of the created world. The video game Dead Space by Electronic Arts is an exemplar of this new style: traditional game information such as health and ammunition has been woven into the character design, allowing for an unobstructed view.
How It Works
The device will probably communicate with mobile phones through Wi-Fi, display content on the video screen, and respond to the user's voice commands. Google put together a short video demonstrating the features and apps of Google Glass, concentrating mainly on social networking, navigation and communication. The video camera senses the environment and recognizes the objects and people around the wearer. The whole operation of Google Glass depends on the user's voice commands.
Sergey Brin has been loosely associated with Batman since the fall of 2011, setting persuasive discursive grounds for actions that Google takes. A compelling character in the narrative that charts this technology’s emergence, the name “Sergey Brin” appears 713 times in the corpus of 1,000 print and online news articles about Google Glass. Often the story concentrates on Brin’s activities, comments, whereabouts, and future expectations amid news of a technology that only exists as an artifact of the press for the public. Rupert Till explains the definition of how an individual must amass popular fame in order to form a “cult of personality”: A celebrity is someone who is well known for being famous, and whose name alone is recognizable, associated with their image, and is capable of generating money. . . For a star to progress to a point where they are described as a popular icon requires their achievement of a level of fame at which they are treated with the sort of respect traditionally reserved for religious figures. In order to be described as a popular icon, a star has to become a religious figure, to develop their own personality cult and recruit followers.
Benefits
1. Easy to wear and use.
2. Sensitive and responsive to the presence of people.
3. Fast access to maps, documents, videos, chats and much more.
4. A new trend for fashion lovers, while also being an innovative technology.

 

Mobile IPTV

Abstract

An IPTV signal is a stream of data packets traveling across the Web. Internet TV is relatively new; there are lots of different ways to get it, and quality, content and costs can vary greatly. Shows can be high-quality, professionally produced material, while others might remind you of Wayne and Garth broadcasting "Wayne's World" from their basement. Traditional TV networks are also easing into the technology and experimenting with different formats.
Internet TV, in simple terms, is video and audio delivered over an Internet connection. It's also known as Internet protocol television, or IPTV. You can watch Internet TV on a computer screen, a television screen (through a set-top box) or a mobile device like a cell phone or an iPod.

It's almost the same as getting television through an antenna or a series of cable wires -- the difference is that information is sent over the Internet as data. At the same time, you can find even more variety on Internet TV than cable TV. Along with many of the same shows you find on the big networks, many Web sites offer independently produced programs targeted toward people with specific interests.

Description of Mobile IPTV

 

How IPTV Works

There are two things that make Internet TV possible. The first is bandwidth. To understand bandwidth, it's best to think of the Internet as a series of highways and information as cars. If there's only one car on the highway, that car will travel quickly and easily. If there are many cars, however, traffic can build up and slow things down. The Internet works the same way -- if only one person is downloading one file, the transfer should happen fairly quickly. If several people are trying to download the same file, though, the transfer can be much slower. In this analogy, bandwidth is the number of lanes on the highway. If a Web site's bandwidth is too low, traffic will become congested. If the Web site increases its bandwidth, information will be able to travel back and forth without much of a hassle. Bandwidth is important for Internet TV, because sending large amounts of video and audio data over the Internet requires large bandwidths.
The second important part of Internet TV is streaming audio and video. Streaming technology makes it possible for us to watch live or on-demand video without downloading a copy directly to a computer.
There are a few basic steps to watching streaming audio and video:
1. A server holds video data.
2. When you want to watch a video, you click the right command, like "Play" or "Watch." This sends a message to the server, telling it that you want to watch a certain video.
3. The server responds by sending you the necessary data. It uses streaming media protocols to make sure the data arrives in good condition and with all the pieces in the right order.
4. A plug-in or player on your computer -- Windows Media Player and RealPlayer are two popular examples -- decodes and plays the video signal.
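The client side of these four steps can be sketched with plain HTTP range requests, one common way on-demand streaming is implemented (live streams use protocols such as RTSP or HLS instead). The URL is a placeholder, and the "player" below just reports bytes rather than decoding video.

# Sketch of fetching a video in chunks, assuming the server honours Range.

import urllib.request

URL = "https://example.com/video.mp4"   # hypothetical server holding the video
CHUNK = 64 * 1024                       # fetch 64 KiB at a time

def stream(url):
    offset = 0
    while True:
        req = urllib.request.Request(
            url, headers={"Range": f"bytes={offset}-{offset + CHUNK - 1}"})
        with urllib.request.urlopen(req) as resp:   # step 2: ask the server
            data = resp.read()                      # step 3: the chunk arrives
        if not data:
            break
        # step 4 stand-in: a real player would decode and display this chunk
        print(f"played bytes {offset}..{offset + len(data) - 1}")
        offset += len(data)
        if len(data) < CHUNK:   # short read means end of file
            break

# stream(URL)  # uncomment with a real URL that supports range requests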

Mobile IPTV

Mobile IPTV is a technology that enables users to transmit and receive multimedia traffic, including television signals, video, audio, text and graphics, through IP-based wired and wireless networks with support for QoS/QoE, security, mobility, and interactive functions. Through Mobile IPTV, users can enjoy IPTV services anywhere, even while on the move. In fact, IPTV is composed of Internet Protocol (IP) and TV; in other words, it implies that traditional TV services are being migrated and converged into Internet space. As long as we use the Internet, IP is a vital component, and all of the advantages of IP can be used for IPTV services. Everyone agrees that IP has played and will play a major role in the evolution of networks and services. IP allows you to make use of all IP-based services, including IPTV services, anywhere on earth through the Internet. The major goal of this paper is to raise the interests and concerns of Mobile IPTV, including the status of standardization activities, when deploying IPTV services over wireless and mobile networks, and to expand the value of IPTV in the structure of everyday life.
Mobile TV Plus IP Approach
This approach uses traditional digital broadcast networks to deliver IP-based audio, video, graphics and other broadband data services to the user on the move. This is a prime example of the increasing convergence of broadcasting, telecommunications and computing. The reason it is pursued is to build a content environment that combines the stability and usability of broadcasting with the diverse services of the Internet. To make this approach more attractive, wide-area wireless networks such as cellular networks are integrated to support interactivity. The outstanding activities in this approach are Digital Video Broadcast (DVB)-CBMS (Convergence of Broadcasting and Mobile Services) and the WorldDAB (DAB: Digital Audio Broadcasting) Forum. DVB-CBMS is developing bi-directional mobile IP-based broadcasting protocol specifications over DVB-H [6]. DVB-CBMS has already finished Phase I and is currently working on Phase II.
The WorldDAB Forum is enhancing and extending Eureka 147 to support IP-based services. Eureka 147 was originally developed for digital radio applications and was extended to support video services. Even though this approach technically qualifies as Mobile IPTV, the use of broadcasting networks may incur the loss of the individuality of IP.
Applications
Besides games, many other kinds of applications are possible, including EPGs (Electronic Program Guides), weather services, TV-program-related applications, interactive advertisements, T-Government services and chat. IPTV also allows service providers to easily offer video-related interactive services: video conferencing, Karaoke-on-Demand, and automatic, interactive jukebox services for high-quality content such as music videos or for user-generated content. "Providing mobile TV free in return for receiving advertising offers perhaps the best route for boosting the numbers watching mobile TV. However, the obstacles… mean that it is difficult to see it fulfilling much more than a niche viewing role in the medium term."

 

5 Pen PC Technology

Computers affect our lives in a much bigger way than most of us might have thought. It has become a compulsory requirement in most professions to be able to use computer software. The first computer, the ENIAC of 1943, was the size of a large room and consumed as much power as several hundred modern PCs. Modern computers based on integrated circuits are small enough to fit into mobile devices. Among the most compact computers available right now are tablet computers, the most popular being the iPad, but even that has a 9.7-inch screen and weighs about 700 grams. Now imagine a computer that will fit in your pencil case.
P-ISM (“Pen-style Personal Networking Gadget Package”) is a new concept under development by NEC Corporation. P-ISM is a gadget package including five functions: a pen-style cellular phone with a handwriting data input function, a virtual keyboard, a very small projector, a camera scanner, and a personal ID key with a cashless pass function. P-ISMs are connected with one another through short-range wireless technology. The whole set is also connected to the Internet through the cellular phone function. This personal gadget in a minimalist pen style enables the ultimate in ubiquitous computing.

It seems to many of us these days that the pace of technological change is so great that it outstrips our imaginations — just as soon as we can conceive of the next nifty electronic gadget we'd like to have, we find out that somebody has already built it.
Miniaturized devices such as cameras and telephones are examples of now-common technologies that just a few years ago most of us rarely encountered outside the fictional world of spy thrillers. Miniaturized personal computers are the next logical step, but many readers might be surprised to learn that a plan for PC components housed in devices the size and shape of ballpoint pens (as shown above) was showcased by a major electronics company over two years ago.
P-ISM
The P-ISM system was based on "low-cost electronic perception technology" produced by the San Jose, California, firm of Canesta, Inc., developers of technologies such as the "virtual keyboard" (although the last two pictures shown above appear to be virtual keyboard products sold by other companies such as VKB rather than components of the P-ISM prototype).
As noted above, P-ISMs connect with one another through short-range wireless technology, and the whole set connects to the Internet through the cellular phone function. They communicate through tri-wireless modes (Bluetooth, 802.11B/G, and cellular), all made small enough to fit in a small pen-like device.
Connectivity: 802.11B/G and Bluetooth
In fact, no one expects much activity on 802.11n installations until the middle of 2008. Rolling out 802.11n would mean a big upgrade for customers who already have full Wi-Fi coverage, and would be a complex add-on to existing wired networks for those who haven't. Bluetooth is widely used because it lets us transfer data and make connections without wires, whenever we need to. Bluetooth and 802.11B/G both operate in the 2.4 GHz ISM frequency band (although they use different access mechanisms). The Bluetooth mechanism is used for exchanging signal-status information between two devices. Although techniques have been developed that do not require communication between the two devices (such as Bluetooth's Adaptive Frequency Hopping), the most efficient and comprehensive solution to the most serious coexistence problems can be accomplished by silicon vendors, who can implement information-exchange capabilities within their Bluetooth designs. Using this connectivity, the device can also connect to the Internet and be accessed anywhere in the world.
The role of the monitor is taken by an LED projector which projects onto a screen of roughly A4 size, with an approximate resolution of 1024 × 768, giving good clarity and picture quality.


3D Television

Three-dimensional TV is expected to be the next revolution in TV history. Researchers have implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes, and developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time, end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted "virtual reality" television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

The evolution of visual media such as cinema and television is one of the major hallmarks of our modern civilization. In many ways, these visual media now define our modern life style. Many of us are curious: what is our life style going to be in a few years? What kind of films and television are we going to see? Although cinema and television both evolved over decades, there were stages, which, in fact, were once seen as revolutions:
1) at first, films were silent, then sound was added;
2) cinema and television were initially black-and-white, then color was introduced;
3) computer imaging and digital special effects have been the latest major novelty.

BASICS OF 3D TV

Humans gain three-dimensional information from a variety of cues. Two of the most important are binocular parallax and motion parallax.
A. Binocular Parallax
For any point you fixate, the images on the two eyes must be slightly different; yet the two different images still allow us to perceive a stable visual world. Binocular parallax refers to the ability of the eyes to see a solid object and a continuous surface behind that object even though the eyes see two different views.
B. Motion Parallax
Motion parallax is the change in the retinal image caused by the relative movement of objects as the observer moves to the side (or his head moves sideways). Motion parallax varies depending on the distance of the observer from objects. The observer's movement also causes occlusion (covering of one object by another), and as movement changes so too does occlusion. This can give a powerful cue to the distance of objects from the observer.
C. Depth perception
Depth perception is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object. The small distance between our eyes gives us stereoscopic depth perception [7]. The brain combines the two slightly different images into one 3D image. It works most effectively for distances up to 18 feet; for objects at a greater distance, our brain uses relative size and motion. As shown in the figure, each eye captures its own view, and the two separate images are sent to the brain for processing. When the two images arrive simultaneously in the back of the brain, they are united into one picture. The mind combines the two images by matching up the similarities and adding in the small differences. The small differences between the two images add up to a big difference in the final picture! The combined image is more than the sum of its parts; it is a three-dimensional stereo picture.
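The geometry behind these cues reduces to one formula for parallel cameras or eyes: depth Z = f x B / d, where f is the focal length, B the baseline between the eyes, and d the disparity between the two image positions. The numbers below are purely illustrative.

# Worked example of stereo depth from binocular disparity.

FOCAL_LENGTH_PX = 1000   # f, in pixels
BASELINE_M = 0.065       # B, roughly the human interocular distance

def depth_from_disparity(disparity_px):
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

for d in (40.0, 20.0, 10.0):
    print(f"disparity {d:>4} px  ->  depth {depth_from_disparity(d):.2f} m")
# halving the disparity doubles the perceived distance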

The whole system consists mainly of three blocks:
1. Acquisition
2. Transmission
3. Display Unit
A. Acquisition
The acquisition stage consists of an array of hardware-synchronized cameras. Small clusters of cameras are connected to producer PCs. The producers capture live, uncompressed video streams and encode them using standard MPEG coding. The compressed video is then broadcast on separate channels over a transmission network, which could be digital cable, satellite TV or the Internet.
Generally, 16 Basler A101fc color cameras are used, with 1300×1030, 8-bit-per-pixel CCD sensors.
1) CCD Image Sensors: Charge coupled devices are electronic devices that are capable of transforming a light pattern (image) into an electric charge pattern (an electronic image).
2) MPEG-2 Encoding: MPEG-2 is an extension of the MPEG-1 international standard for digital compression of audio and video signals. MPEG-2 is directed at broadcast formats at higher data rates; it provides extra algorithmic 'tools' for efficiently coding interlaced video, supports a wide range of bit rates and provides for multichannel surround-sound coding. MPEG-2 aims to be a generic video coding system supporting a diverse range of applications. The team built a PCI card with a custom programmable logic device (CPLD) that generates the synchronization signal for all the cameras. So, what is a PCI card?
3) PCI Card:
One element ties all the components of a computer together: the bus. Essentially, a bus is a channel or path between the components in a computer. Here we concentrate on the bus known as the Peripheral Component Interconnect (PCI): what PCI is, how it operates and how it is used.
All 16 cameras are individually connected to the card, which is plugged into one of the producer PCs. Although it is possible to use software synchronization, precise hardware synchronization is considered essential for dynamic scenes. Note that the price of the acquisition cameras can be high, since they will mostly be used in TV studios. The 16 cameras are arranged in a regularly spaced linear array.

3D DISPLAY
This is a brief explanation that we hope sorts out some of the confusion about the many 3D display options available today. We'll tell you how they work, and what the relative tradeoffs of each technique are. Those of you just interested in comparing different liquid crystal shutter glasses techniques can skip to the section at the end. Of course, we are always happy to answer your questions personally, and point you to other leading experts in the field [4]. The figure shows a diagram of the multi-projector 3D display with lenticular sheets.
They use 16 NEC LT-170 projectors with 1024×768 native output resolution. This is less than the resolution of the acquired and transmitted video, which has 1300×1030 pixels; however, HDTV projectors are much more expensive than commodity projectors, which also come in a compact form factor. Of the eight consumer PCs, one is dedicated as the controller. The consumers are identical to the producers except for a dual-output graphics card that is connected to two projectors; the graphics card is used only as an output device. For the rear-projection system shown in the figure, two lenticular sheets are mounted back-to-back with optical diffuser material in the center. The front-projection system uses only one lenticular sheet, with a retro-reflective front-projection screen material made from flexible fabric mounted on the back. Photographs show the rear- and front-projection setups.


4G Broadband

The early days of home Internet access required using a modem connected to a computer to dial a number and maintain a connection. It was cumbersome and slow. The faster modems became, the more people realized how painfully sluggish data transmission had been in the days of 300 baud. Eventually, home users who could afford the jump in price could get broadband access via digital subscriber lines (DSL), cable and satellite. Technology changes from day to day, and this is also happening in networking. Breakthroughs in science have pushed networking from wired to wireless, which is inexpensive and efficient. Wi-Fi (Wireless Fidelity) is a technology that provides dynamic connectivity to a network wirelessly, working on the principle of radio transmission, but Wi-Fi is accessible only within a limited area. In this paper we present a technology that overcomes problems like limited-area connectivity and is also eco-friendly: WiMAX (Worldwide Interoperability for Microwave Access), which supports the concept of Internet everywhere, connecting people by opening up the Internet to create a more spontaneous and empowering broadband experience.


If we have been in an airport, coffee shop, library or hotel recently, chances are we have been right in the middle of a wireless network. Many people use wireless networking, also called Wi-Fi or 802.11 networking. In the near future, wireless networking may become so widespread that you can access the Internet just about anywhere at any time without using wires. Wireless networks are easy to set up and inexpensive.
Wireless networks use radio waves, just like cell phones, televisions and radios do. In fact, communication across a wireless network is a lot like two-way radio communication:
1. A computer's wireless adapter translates data into a radio signal and transmits it using an antenna.
2. A wireless router receives the signal and decodes it. It sends the information to the Internet using a physical, wired Ethernet connection.
The process also works in reverse, with the router receiving information from the Internet, translating it into a radio signal and sending it to the computer's wireless adapter.

Need for WiMax
WiMAX outdistances Wi-Fi by miles. WiMAX is short for Worldwide Interoperability for Microwave Access, and it also goes by the IEEE name 802.16. A WiMAX receiver would take data from the WiMAX transmitting station, probably using encrypted data keys to prevent unauthorized users from stealing access; this is a main advantage, as network security is built in.
WiMAX has the potential to do to broadband Internet access what cell phones have done to phone access. In the same way that many people have given up their "land lines" in favor of cell phones, WiMAX could replace cable and DSL services, providing universal Internet access just about anywhere you go. WiMAX will also be as painless as Wi-Fi: turning your computer on will automatically connect you to the closest available WiMAX antenna.
A WiMAX tower station can connect directly to the Internet using a high-bandwidth, wired connection (for example, a T3 line). It can also connect to another WiMAX tower using a line-of-sight, microwave link. This connection to a second tower (often referred to as a backhaul), along with the ability of a single tower to cover up to 3,000 square miles, is what allows WiMAX to provide coverage to remote rural areas.
Wi-Fi-style access will be limited to a 4-to-6 mile radius (perhaps 25 square miles or 65 square km of coverage, which is similar in range to a cell-phone zone). Through the stronger line-of-sight antennas, the WiMAX transmitting station would send data to WiMAX-enabled computers or routers set up within the transmitter's 30-mile radius (2,800 square miles or 9,300 square km of coverage). This is what allows WiMAX to achieve its maximum range.
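The quoted coverage figures are circle areas, which is easy to sanity-check:

# Reproducing the coverage figures above from the radii.

import math

def coverage_sq_miles(radius_miles):
    return math.pi * radius_miles ** 2

for r in (4, 6, 30):
    print(f"radius {r:>2} mi -> {coverage_sq_miles(r):,.0f} sq mi")
# 4 mi -> 50, 6 mi -> 113, 30 mi -> 2,827 sq mi: the ~2,800 sq mi figure
# matches the 30-mile radius, while the 25 sq mi figure is more
# conservative than the stated 4-to-6 mile radius implies.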

Suppose an Internet service provider sets up a WiMAX base station 10 miles from our home. We would buy a WiMAX-enabled computer or upgrade our old computer to add WiMAX capability, and we would receive a special encryption code giving us access to the base station. The base station would beam data from the Internet to our computer (at speeds potentially higher than today's cable modems), for which we would pay the provider a monthly fee. The cost for this service could be much lower than current high-speed Internet subscription fees, because the provider never has to run cables. The WiMAX protocol is designed to accommodate several different methods of data transmission, one of which is Voice Over Internet Protocol (VoIP). VoIP allows people to make local, long-distance and even international calls through a broadband Internet connection, bypassing phone companies entirely. If WiMAX-compatible computers become very common, the use of VoIP could increase dramatically; almost anyone with a laptop could make VoIP calls.
XOHM (4G Technology):
XOHM is coming, providing next-generation mobile broadband across your city. With XOHM, you no longer need to find a hotspot for a broadband Internet experience; the hotspot comes with you. There are no compromises here, even if it's streaming full-screen video. And with XOHM, you have one account and it's always available. No long-term contracts: you can pay by the day, the month or the year.
XOHM won’t just connect WiMAX-enabled products to the Internet; it’ll allow them to connect across the network to each other. We expect this to open exciting new experiences beyond just getting online, with the potential to change how we communicate, enjoy, and achieve. For example:
• Health: a mobile health monitor could track and transmit a user’s vitals and alert a hospital or caregiver in case of an emergency.
• Sports: a runner’s performance could be monitored by WiMAX-enabled chips built into her shoes to be shared with coaches, peers or spectators.
• Home Entertainment: While you're out of town, your WiMAX-enabled DVR could send a reminder to your phone that your favorite TV show is about to start, and you could command it to record the show to watch later via your WiMAX-enabled portable video player.


Next Generation Internet
Get ready to experience how spontaneous the internet can be. With XOHM mobile broadband, you’ll be able to:
• Stream movies
• Watch a video
• Download music
• Share photos
• Play games
• Instant message
• E-mail
• Surf the web
Or do whatever you want, around your home, office or on the go, wherever there’s XOHM coverage, all on the same connection.
Plug and Play:
Getting started with XOHM is a snap: no wires means no service calls, drilling, or digging. Just plug and play.
Compendium:
To conclude, among all communication interfaces wireless is the better one, and within wireless, WiMAX is the better solution in all aspects; some companies are now trying to establish their networks using this technology.


Spyware

Over the last several years, a loosely defined collection of computer software known as “Spyware” has become the subject of growing public alarm. Computer users are increasingly finding programs on their computers that they did not know were installed and that they cannot uninstall, that create privacy problems and open security holes, that can hurt the performance and stability of their systems, and that can lead them to mistakenly believe that these problems are the fault of another application or their Internet provider.

The term “spyware” has been applied to everything from keystroke loggers, to advertising applications that track users’ web browsing, to web cookies, to programs designed to help provide security patches directly to users. More recently, there has been particular attention paid to a variety of applications that piggyback on peer-to-peer file-sharing software and other free downloads as a way to gain access to people’s computers. This report focuses primarily on these so-called “adware” and other similar applications, which have increasingly been the focus of legislative and regulatory proposals.

Many of these applications represent a significant privacy threat, but in our view the larger concerns raised by these programs are transparency and user control, problems sometimes overlooked in discussions about the issue and to a certain extent obscured by the term “spyware” itself.

Store Management System

The system is a standalone application that enables a store to schedule its item purchase operations based on the daily update of sales to its customers. Once an item's stock falls to the minimum availability figure, a purchase order for the deficient item is automatically placed. The application also gives a pictorial representation of the sales and stock of the store, which helps the storekeeper analyze the sales and stock of a product.
This application has two users Administrator and Cashier. The administrator can enter the new stock details view the graphical representation. Administrator can enter the selling price and cost price to track the profit and loss. The cashier can only bill the products bought by a customer.
The automatic purchase order for the deficient products is placed by an SMS and email alert.
This application requires internet connection to place the SMS and email purchase order alerts.
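A minimal Python sketch of the reorder rule described above; the item names, minimum-stock figures and the alert step are hypothetical placeholders for the store's real data and its SMS/e-mail alerts:

# Toy reorder rule: when a sale drops stock to the minimum level,
# a purchase order is triggered automatically.
MIN_STOCK = {"soap": 20, "rice": 50}   # per-item minimum stock levels
stock     = {"soap": 23, "rice": 48}   # current stock, updated from daily sales

def place_purchase_order(item):
    # Placeholder for the real SMS/e-mail purchase-order alert.
    print(f"Purchase order placed for deficient item: {item}")

def record_sale(item, qty):
    """Deduct a sale and trigger a purchase order when stock hits the minimum."""
    stock[item] -= qty
    if stock[item] <= MIN_STOCK[item]:
        place_purchase_order(item)

record_sale("soap", 4)   # soap drops to 19, below the minimum of 20: order placed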

Analog Communication Systems

Analog And Digital System
Analog System:
A string tied to a doorknob is a simple analog system. Its position changes smoothly over time and can take any one of an infinite set of values in a range. If you hold a measuring stick or ruler at a specific point along the string, you can measure the string's value at that point every so many seconds. When you watch it move, you will see that it moves continuously; it doesn't instantly jump up and down the ruler.

Digital System:
Flicking a light switch on and off is a digital system. There are no in-between values, unlike our string: if the switch you are using is not a dimmer switch, the light is either on or off. In this case the transmitter is the light bulb, the medium is the air, and the receiver is your eye.

Analog And Digital Signal
Analog Signal:
An analog signal is any continuous-time signal in which some time-varying feature of the signal represents some other time-varying quantity. An analog signal is a datum that changes over time – say, the temperature at a given location, the depth of a certain point in a pond, or the amplitude of the voltage at some node in a circuit.

In analog technology, a wave is recorded or used in its original form. So, for example, in an analog tape recorder, a signal is taken straight from the microphone and laid onto tape. The wave from the microphone is an analog wave, and therefore the wave on the tape is analog as well. That wave on the tape can be read, amplified and sent to a speaker to produce the sound.
In an analog signal, the value can vary continuously, for example from a negative value to a positive one, or from zero to a positive value.

Digital Signal:
The term can refer to discrete-time signals that have been digitized, or to the waveform signals used in a digital system. Digital signals are digital representations of discrete-time signals, which are often derived from analog signals. A discrete-time signal is a sampled version of an analog signal: the value is noted at fixed intervals (for example, every microsecond) rather than continuously.

In digital technology, the analog wave is sampled at some interval and then turned into numbers that are stored in the digital device. On a CD, the sampling rate is 44,100 samples per second, so there are 44,100 numbers stored per second of music. To hear the music, the numbers are turned back into a voltage wave that approximates the original wave.
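As an illustration of sampling, here is a minimal Python sketch, assuming a 1 kHz analog sine tone sampled at the CD rate:

import math

F_SIGNAL = 1000       # analog tone frequency, Hz
F_SAMPLE = 44100      # CD sampling rate, samples per second

def analog(t):
    """The continuous (analog) waveform: defined for every instant t."""
    return math.sin(2 * math.pi * F_SIGNAL * t)

# The digital version: the analog value noted only at fixed intervals.
samples = [analog(n / F_SAMPLE) for n in range(10)]
print(samples)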

A digital signal recognises values only at or around two levels and interprets them as a logic 1 or a logic 0.
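A toy illustration of that two-level interpretation, assuming nominal 0 V / 5 V logic and a 2.5 V decision threshold (all values illustrative):

THRESHOLD = 2.5
readings = [0.2, 4.8, 5.1, 0.1, 2.6]             # measured line voltages
bits = [1 if v > THRESHOLD else 0 for v in readings]
print(bits)                                       # [0, 1, 1, 0, 1]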

Android Technology

A mobile operating system, also referred to as a mobile OS, is the operating system that runs a smartphone, tablet, PDA or other digital mobile device. Modern mobile operating systems combine the features of a personal computer operating system with a touch screen, cellular connectivity, Bluetooth, Wi-Fi, GPS navigation, a camera, a video camera, speech recognition, a voice recorder, music playback, near field communication, personal digital assistant (PDA) functions and other features. The smartphone market is dominated by three major operating systems: Android by Google, iOS by Apple and Windows Phone 7.5 by Microsoft.
These days, the hot topics that almost everyone is interested in are mobile technology, mobile devices and, of course, mobile operating systems. Everyone wants to be able to do everything fast and on the go. The developers have done a great job feeding us a never-ending stream of new apps, new devices and new hacks. We are like drug-addicted junkies, and we just can't get enough. I have to admit, I love it! I will be one of those people who will run out and buy an iPhone 5 knowing full well that a year from now it will be obsolete. And I can't wait to do it. So it's a battle in the mobile operating system market, and we are going to throw light on all three major operating systems.
INTRODUCTION
There have been many developments in mobile operating system technology, but the single biggest revolution came when Apple launched the iPhone. Apple turned the mobile handset from a mere voice- and data-enabled device into a "super-cool gizmo." So far Apple has reigned as the king of mobile, but Google's Android has launched a volley of successful attacks on its rival, questioning the dominance of iOS. On the other hand, the newest of the three entrants, Windows Phone, is still working on building up adoption. Some might even say it is now a three-system world, but each OS has its own benefits and challenges.
This year has seen great mobile operating systems and a battle between them that has continued for some years now. The two main opponents today are Apple's iOS 6 and Android 4.1, known by the nickname Jelly Bean. This continuing competition grabs the attention of many users, who are interested in the new features of both operating systems. While some are devoted fans of one of the two, others are more cautious and wait to see what the pros and cons are in this battle. But this battlefield works to the benefit of users: competition usually gets the best out of technology, while it can get the worst out of people.

Antivirus Solutions

Virus attacks and intrusion attempts have been causing a lot of trouble and serious damage to almost all computer users. From the day one starts using a computer, virus infection becomes a concern, leaving users worried about the security of crucial data, the completion of mission-critical tasks and the achievement of important goals.

Antivirus software currently available is well suited to detecting and eliminating known viruses. This traditional approach is becoming obsolete because it does nothing about new threats. Encrypted viruses pose a major headache: these are viruses coded using encryption, which cannot be identified by conventional antivirus software. The only products that can defend against them are antivirus tools with so-called "sandboxing" abilities, which can track down and neutralize viruses despite their encryption. The approach is modeled on running multiple operating environments at the same time: it allows malicious code to run in a protected environment where it cannot harm our data. Sandboxing can protect a system against unknown threats because it operates within a few simple rules. We could, for example, define the system registry as off-limits to changes.

Sandboxing is where an antivirus program takes suspicious code and runs it in a virtual machine (isolated from the rest of the system) in order to see exactly how the code works and what its purpose is. This proactive antivirus technology essentially encloses a running application in a "sandbox", which traps downloaded applications in a controlled environment, such as the temporary files folder, and monitors them for malicious code. This means that before a potentially harmful virus has a chance to reach our network, the software locks it away from critical network resources.
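To make the idea concrete, here is a toy policy sketch in Python, not a real security boundary: the folder names and the rule set are hypothetical, and real sandboxes enforce such rules at the operating-system or virtual-machine level rather than in application code.

import os

ALLOWED_DIR = os.path.abspath("sandbox_tmp")          # the controlled environment
OFF_LIMITS  = [os.path.abspath("system_registry")]    # e.g. the system registry
os.makedirs(ALLOWED_DIR, exist_ok=True)

def sandboxed_write(path, data):
    """Allow writes only inside the sandbox folder; block off-limits areas."""
    full = os.path.abspath(path)
    if any(full.startswith(p) for p in OFF_LIMITS) or not full.startswith(ALLOWED_DIR):
        raise PermissionError(f"sandbox policy blocks write to {full}")
    with open(full, "w") as f:
        f.write(data)

sandboxed_write(os.path.join(ALLOWED_DIR, "note.txt"), "ok")   # permitted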

Protein Memory

Since the dawn of time, man has tried to record important events and techniques for everyday life. At first, it was sufficient to paint on the family cave wall how one hunted. Then spoken languages were invented, and the need arose to record what one was saying without hearing it firsthand. Years later, early scholars invented writing to convey what was being said. Pictures gave way to letters, which represented spoken sounds. Eventually clay tablets gave way to parchment, which gave way to paper. Paper was, and still is, the main way people convey information. However, in the mid-twentieth century, computers began to come into general use.

Computers have gone through their own evolution in storage media. In the forties, fifties and sixties, everyone who took a computer course used punched cards to give the computer information and store data. In 1956, researchers at IBM developed the first disk storage system, called RAMAC (Random Access Method of Accounting and Control). Since the days of punch cards, computer manufacturers have strived to squeeze more data into smaller spaces. That mission has produced both competing and complementary data storage technologies, including electronic circuits, magnetic media like hard disks and tape, and optical media such as compact disks.

Artificial Intelligence

Artificial intelligence (AI) is defined as intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer. Although AI has a strong science fiction connotation, it forms a vital branch of computer science, dealing with intelligent behaviour, learning and adaptation in machines. Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, speech, and facial recognition. As such, it has become a scientific discipline, focused on providing solutions to real life problems. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games like computer chess and other video games.

We have tried to explain the basic ideas of AI and its application to various fields, and to clarify the distinction between the computational and conventional categories. AI includes various advanced systems such as neural networks, fuzzy systems and evolutionary computation, and is used in typical problems such as pattern recognition and natural language processing. Such systems are at work throughout the world as artificial brains.

Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered "somewhat intelligent". It is related to the similar task of using computers to understand human intelligence.

We can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do. We discussed conditions for considering a machine to be intelligent. We argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent.

Digital Investigation Process

Digital forensics has been defined as the use of scientifically derived and proven methods towards the preservation, collection, validation, identification, analysis, interpretation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal or helping to anticipate the unauthorized actions shown to be disruptive to planned operations [3].

One important element of digital forensics is the credibility of the digital evidence. Digital evidence includes computer evidence, digital audio, digital video, cell phones, digital fax machines and so on. Legal settings require evidence to have integrity, authenticity, reproducibility, non-interference and minimization.

Computer crimes are on the rise and unfortunately less than two percent of the reported cases result in conviction. The process (methodology and approach) one adopts in conducting a digital forensics investigation is immensely crucial to the outcome of such an investigation.

Overlooking one step or interchanging steps may lead to incomplete or inconclusive results, and hence to wrong interpretations and conclusions. A computer crime culprit may walk scot-free, or an innocent suspect may suffer negative consequences (both monetary and otherwise), simply on account of a forensics investigation that was inadequate or improperly conducted.
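One concrete safeguard for the integrity requirement mentioned above is to record a cryptographic hash of each evidence image at acquisition time, so that any later alteration is detectable. A minimal Python sketch (the file name is a hypothetical example):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a (possibly large) evidence file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

print(sha256_of("evidence_disk.img"))   # record alongside the evidence log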

Touch Screens

Touch screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display activates the virtual button or feature displayed at that location.

Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information.

A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications, and a touchscreen can be used with most PC systems as easily as other input devices such as trackballs or touch pads.

Universal Broadband Connectivity

This paper outlines a migration path towards Universal Broadband Connectivity, motivated by highlighting the advantages of, and policy implications for, a novel asynchronous wireless communications network.

We argue that the cost of real-time, circuit-switched communications is sufficiently high that it may be the wrong starting point for rural connectivity. Based on market data for information and communication technology (ICT) services in rural India, we propose a combination of wireless technology with an asynchronous mode of communications as a means of introducing ICTs.

The Digital Divide is just as much about a gap in understanding as it is a gap in connectivity. There are often clear fundamental differences between what is proposed by technology visionaries, many of whom have never seen a village, and what is actually needed by end-users, many of whom have never used a telephone.

Bluetooth Technology

Bluetooth is a radio frequency specification for short-range, point-to-point and point-to-multipoint voice and data transfer. Bluetooth technology facilitates the replacement of the cables normally used to connect one device to another with a short-range radio link. With Bluetooth we can operate a keyboard and mouse without a direct connection to the CPU; printers, fax machines, headphones, mice, keyboards and other digital devices can all be part of a Bluetooth system.
Beyond replacing cables, Bluetooth technology works as a universal medium to bridge existing data networks, serves as a peripheral interface for existing devices, and provides a mechanism to form short-range ad-hoc networks of connected devices away from fixed network infrastructures.
Because they rely on a short-range radio link, Bluetooth devices do not require a line-of-sight connection in order to communicate, so a computer can print to a printer even if the printer is in another room. Two Bluetooth devices can talk to each other when they come within about 10 meters of each other.
Bluetooth technology represents an opportunity for the industry to deliver wireless solutions that are ubiquitous across a broad range of devices.
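As a small illustration of ad-hoc discovery, here is a hedged Python sketch using the third-party PyBluez library (assuming it is installed and a Bluetooth adapter is present):

import bluetooth

# Scan for devices within radio range (roughly 10 meters for class 2 radios).
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
for addr, name in nearby:
    print(f"Found {name} at {addr}")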

Smart Quill

Lyndsay Williams of Microsoft Research's Cambridge UK lab is the inventor of SmartQuill, a pen that can remember the words it is used to write and then transform them into computer text. The idea that "it would be neat to put all of a handheld PDA-type computer in a pen" came to the inventor in her sleep. "It's the pen for the new millennium," she says.

Encouraged by Nigel Ballard, a leading consultant to the mobile computer industry, Williams took her prototype to the British Telecommunications Research Lab, where she was promptly hired and given money and institutional support for her project. The prototype, called SmartQuill, has been developed by world-leading research laboratories run by BT (formerly British Telecom) at Martlesham, eastern England. It is claimed to be the biggest revolution in handwriting since the invention of the pen.

SmartQuill contains sensors that record movement by using the earth's gravity, irrespective of the surface used. The pen records the information written by the user. Your words of wisdom can also be uploaded to your PC through the "digital inkwell", while the files that you might want to view on the pen are downloaded to SmartQuill as well.

Sixth Sense Technology

We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses of sight, hearing, smell, taste and touch to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with those five senses, namely the data, information and knowledge that mankind has accumulated about everything, which is increasingly all available online.
Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper or, digitally, to a screen. SixthSense bridges this gap, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. 'SixthSense' frees information from its confines by seamlessly integrating it with reality, making the entire world your computer. "Sixth Sense Technology" is the newest jargon to proclaim its presence in the technical arena: with it, our ordinary computers will soon be able to sense the different feelings accumulated in the surroundings. SixthSense is a wearable, gesture-based device that augments the physical world with digital information and lets people use natural hand gestures to interact with that information.
Right now, we use our “devices” (computers, mobile phones, tablets, etc.) to go into the internet and get information that we want. With SixthSense we will use a device no bigger than current cell phones and probably eventually as small as a button on our shirts to bring the internet to us in order to interact with our world!
SixthSense will allow us to interact with our world like never before. We can get information on anything we want from anywhere within a few moments! We will not only be able to interact with things on a whole new level but also with people! One great part of the device is its ability to scan objects or even people and project out information regarding what you are looking at.

What is SixthSense?

Sixth Sense, in scientific (or non-scientific) terms, is defined as extra-sensory perception, or ESP for short: the reception of information not gained through any of the five senses, nor taken from past experience or knowledge. SixthSense aims to integrate online information and technology more seamlessly into everyday life. By making information needed for decision-making available beyond what we can access with our five senses, it effectively gives users a sixth sense.

MPEG-7 Format

Accessing audio and video used to be a simple matter – simple because of the simplicity of the access mechanisms and because of the scarcity of the sources. An immense amount of audiovisual information is now becoming available in digital form – in digital archives, on the World Wide Web, in broadcast data streams and in personal and professional databases – and this amount is only growing. The value of information often depends on how easily it can be found, retrieved, accessed, filtered and managed.
The transition between the second and third millennium abounds with new ways to produce, offer, filter, search and manage digitized multimedia information. Broadband is being offered with increasing audio and video quality and speed of access. The trend is clear: in the next few years, users will be confronted with such a large amount of content from multiple sources that efficient and accurate access to it seems unimaginable today. Despite increasing access to these resources, identifying and managing them efficiently is becoming more difficult because of the sheer volume. This applies to professional and end users alike. The question of identifying and managing content is not restricted to database retrieval applications such as digital libraries, but extends to areas like broadcast channel selection, multimedia editing and multimedia directory services.
This challenging situation demands a timely solution to the problem. MPEG-7 is the answer to this need.
MPEG-7 is an ISO/IEC standard developed by MPEG (Moving Picture Experts Group), the committee that also developed the successful standards known as MPEG-1 (1992) and MPEG-2 (1994), and the MPEG-4 standard (Version 1 in 1998, and version 2 in 1999). The MPEG-1 and MPEG-2 standards have enabled the production of widely adopted commercial products, such as Video CD, MP3, digital audio broadcasting (DAB), DVD, digital television (DVB and ATSC), and many video-on-demand trials and commercial services. MPEG-4 is the first real multimedia representation standard, allowing interactivity and a combination of natural and synthetic material coded in the form of objects (it models audiovisual data as a composition of these objects). MPEG-4 provides the standardized technological elements enabling the integration of the production, distribution and content access paradigms of the fields of interactive multimedia, mobile multimedia, interactive graphics and enhanced digital television.
The MPEG-7 standard, formally named "Multimedia Content Description Interface", provides a rich set of standardized tools to describe multimedia content. Both human users and automatic systems that process audiovisual information are within the scope of MPEG-7.
MPEG-7 offers a comprehensive set of audiovisual Description Tools (the metadata elements and their structure and relationships, defined by the standard in the form of Descriptors and Description Schemes) to create descriptions (i.e., sets of instantiated Description Schemes and their corresponding Descriptors, at the user's will), which form the basis for applications enabling the effective and efficient access (search, filtering and browsing) to multimedia content that is needed. This is a challenging task given the broad spectrum of requirements and targeted multimedia applications, and the large number of audiovisual features of importance in this context.
MPEG-7 has been developed by experts representing broadcasters, electronics manufacturers, content creators and managers, publishers, intellectual property rights managers, telecommunication service providers and academia.

Aeronautical Communication

The demand for making air travel more pleasant, secure and productive for passengers is one of the winning factors for airlines and the aircraft industry. Current trends are towards high-data-rate communication services, in particular Internet applications. In an aeronautical scenario, global coverage is essential for providing continuous service, so satellite communication becomes indispensable; together with the ever-increasing data rate requirements of applications, aeronautical satellite communication faces an expanding market.

Wireless Cabin (IST-2001-37466) is investigating radio access technologies to be transported via satellite to terrestrial backbones. The project will provide UMTS services, IEEE 802.11b W-LAN and Bluetooth to cabin passengers. With the advent of new services, a detailed investigation of the expected traffic is necessary in order to plan the capacities needed to fulfill the QoS demands. This paper therefore describes a methodology for the planning of such a system.

In the future, airliners will provide a variety of entertainment and communications equipment to the passenger. Since people are becoming more and more used to their own communications equipment, such as mobile phones and laptops with Internet connection, either through a network interface card or dial-in access through modems, business travelers will soon be demanding wireless access to communication services.

Wireless Cabin Architecture

So far, GSM telephony has been prohibited in commercial aircraft due to the uncertain certification situation and the expected high interference levels of the TDMA technology. With the advent of spread-spectrum systems such as UMTS and W-LAN, and low-power pico-cell access such as Bluetooth, this situation is likely to change, especially if new aircraft avionics technologies are considered, or if the communications technologies keep pace with aircraft development as they do today.

When wireless access technologies in aircraft cabins are envisaged for passenger service, the most important standards for future use are considered to be UMTS with the UTRAN air interface, Bluetooth, and IEEE 802.11b W-LAN. These access technologies will of course co-exist with each other, alongside conventional fixed-wire IP networks. The wireless access solution is compatible with other kinds of in-flight entertainment (IFE), such as live TV on board or the provision of Internet access through dedicated hardware installed in the cabin seats. Hence, it should not be seen as an alternative to wired architecture in an aircraft, but as a complementary service for the passengers.

Tablet Personal Computer

A tablet PC is a laptop- or slate-form mobile computer outfitted with a touchscreen that can be controlled with a digital pen or stylus, or via finger touch. A tablet PC does not require a keyboard or mouse; end-users can enter data directly on it. It also offers greater mobility than a conventional laptop.

A tablet computer, or simply tablet, is a medium-sized personal mobile computer integrated into a flat touch screen and primarily using stylus, digital pen or fingertip input along with a virtual onscreen keyboard in lieu of a physical keyboard.

Older tablet personal computers are mainly x86-based and are fully functional personal computers employing a slightly modified personal computer OS (like Windows or Ubuntu Linux) adapted to their touch screen instead of a traditional display, mouse and keyboard. A typical tablet personal computer needs to be stylus-driven, because operating a typical desktop OS requires high precision to select GUI widgets, such as the close-window button.

Speed Detection of Moving Vehicle Using Speed Cameras

Although road safety performance is generally good, the number of people killed and injured on our roads remains unacceptably high. A road safety strategy was therefore published to support new casualty reduction targets. The strategy covers all forms of intervention based on engineering, education and enforcement, and recognizes that many different factors lead to traffic collisions and casualties. A main factor is vehicle speed, and traffic lights and other traffic-management measures are used to reduce it; one of these measures is the speed camera.
Speed cameras are installed at the side of urban and rural roads, usually placed to catch transgressors of the stipulated speed limit for that road. Their sole purpose is to identify and prosecute drivers who pass them while exceeding the stipulated limit.
At first glance this seems reasonable: ensuring that road users do not exceed the speed limit must be a good thing, because it increases road safety, reduces accidents and protects other road users and pedestrians.
Speed limits, then, are a good idea. To enforce them, laws are passed making speeding an offence and signs are erected to indicate the maximum permissible speeds. The police cannot be everywhere to enforce the limits, so enforcement cameras are deployed to do this work; no one with an ounce of common sense deliberately drives past a speed camera in order to be fined and penalized.
So nearly everyone slows down for a speed camera, and we appear to have a solution to the speeding problem. If we assume that speed cameras are the way to make drivers slow down, and that they work efficiently, we would expect them to be numerous, highly visible and easily identifiable, so that drivers do slow down.
In practice, however, speed cameras are often hidden behind trees or road signs, and the first indication that one is passing a camera point is frequently the ruler marks painted on the carriageway or the flash of the camera as it goes off.
Speed cameras were introduced in west London in 1992 and, following their success in reducing speed-related crashes and injuries, their use expanded to many other areas of Great Britain. The equipment is expensive to buy, operate and maintain, and supporting prosecution procedures also incurs substantial administration costs. However, these costs are small compared with the benefits to society and the economy.
Speed cameras are recommended for use in reducing road casualties. Since these cameras save the lives of road users, the speed camera is also known as a "safety camera".
Speed cameras use the basic principles of the Doppler effect and RADAR technology, measuring a vehicle's speed from the frequency shift of the reflected radar signal, as sketched below.
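In this relationship, a radar signal reflected off a vehicle approaching at speed v returns with a Doppler shift of delta_f = 2 * v * f0 / c, so v = delta_f * c / (2 * f0). A worked Python sketch, where the 24 GHz carrier is an assumed typical value and a real camera would add calibration, angle correction and evidential checks:

C  = 3.0e8       # speed of light, m/s
F0 = 24.0e9      # assumed radar carrier frequency, Hz

def speed_from_shift(delta_f):
    """Vehicle speed in m/s from the measured Doppler shift in Hz."""
    return delta_f * C / (2 * F0)

shift = 2000.0                                 # example measured shift, Hz
v = speed_from_shift(shift)
print(f"{v:.1f} m/s = {v * 3.6:.1f} km/h")     # 12.5 m/s = 45.0 km/h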

Handwritten character recognition: Training a simple NN for classification with MATLAB

Character recognition, usually called optical character recognition or OCR, is the mechanical or electronic translation of images of handwritten, typewritten or printed text (usually captured by a scanner) into machine-editable text. It is a field of research in pattern recognition, artificial intelligence and machine vision. Though academic research in the field continues, the focus in character recognition has shifted to the implementation of proven techniques.
For many document-input tasks, character recognition is the most cost-effective and speedy method available. And each year, the technology frees acres of storage space once given over to file cabinets and boxes full of paper documents.

Problem Description :

Before OCR can be used, the source material must be scanned using an optical scanner (and sometimes a specialized circuit board in the PC) to read in the page as a bitmap (a pattern of dots). Software to recognize the images is also required.
The character recognition software then processes these scans to differentiate between images and text and determine what letters are represented in the light and dark areas.
Older OCR systems match these images against stored bitmaps based on specific fonts. The hit-or-miss results of such pattern-recognition systems helped establish OCR's reputation for inaccuracy.
Today's OCR engines add the multiple algorithms of neural network technology to analyze the stroke edge, the line of discontinuity between the text characters, and the background. Allowing for irregularities of printed ink on paper, each algorithm averages the light and dark along the side of a stroke, matches it to known characters and makes a best guess as to which character it is. The OCR software then averages or polls the results from all the algorithms to obtain a single reading.
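To make the polling step concrete, here is a toy Python sketch in which three hypothetical algorithms vote on a single glyph and the engine takes the majority vote:

from collections import Counter

guesses = ["e", "e", "c"]                  # one best guess per algorithm
best, votes = Counter(guesses).most_common(1)[0]
print(best, votes)                          # 'e' wins with 2 of 3 votes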
OCR software can recognize a wide variety of fonts, but handwriting and script fonts that mimic handwriting are still problematic; the additional power of neural networks is therefore required. Developers are taking different approaches to improve script and handwriting recognition. As mentioned above, one possible approach to handwriting recognition is the use of neural networks.
Neural networks can be used if we have a suitable dataset for training and learning purposes. Datasets are one of the most important things when constructing a new neural network: without a proper dataset, training will be useless. There is a saying about the pre-processing and training of data and neural networks: "rubbish in, rubbish out". So how do we produce a proper dataset? First we scan the image. After the image is scanned, we define a processing algorithm which extracts important attributes from the image and maps them into a database or, better said, a dataset. The extracted attributes have numerical values and are usually stored in arrays. With these values the neural network can be trained, and we can get good end results. The quality of a dataset also depends on carefully chosen algorithm attributes; attributes are important and can have a crucial impact on the end results.
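Finally, a minimal, hedged sketch of training such a classifier. The heading mentions MATLAB; this Python sketch shows the same idea using scikit-learn's built-in 8x8 digit images as a stand-in for a scanned handwriting dataset:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                     # images already reduced to
X, y = digits.data, digits.target          # numeric attribute arrays

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                  # training on the dataset
print("test accuracy:", net.score(X_test, y_test))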
