Computer Technos


Sunday, 19 August 2012

Berkeley Motes

Posted on 08:40 by Unknown
The Berkeley motes are a family of embedded sensor nodes sharing roughly the same architecture.

Let us take the MICA mote as an example. The MICA motes have a two-CPU design. The main microcontroller (MCU), an Atmel ATmega103L, takes care of regular processing. A separate, much less capable coprocessor is active only when the MCU is being reprogrammed. The ATmega103L MCU integrates 128 KB of flash program memory and 4 KB of data memory. Given these small memory sizes, writing software for motes is challenging. Ideally, programmers should be relieved from optimizing code at the assembly level to keep the code footprint small. However, high-level support and software services are not free. Being able to mix and match only the necessary software components to support a particular application is essential to achieving a small footprint. A detailed discussion of the software architecture for motes follows later.

In addition to the memory inside the MCU, a MICA mote also has a separate 512 KB flash memory unit for holding data. Since the connection between the MCU and this external memory uses a low-speed serial peripheral interface (SPI) protocol, the external memory is better suited to storing data for later batch processing than to storing programs. The RF communication on MICA motes uses the TR1000 chip set (from RF Monolithics, Inc.) operating in the 916 MHz band. With hardware accelerators, it can achieve a maximum raw data rate of 50 kbps; MICA motes implement a 40 kbps transmission rate. The transmission power can be adjusted digitally by software through a potentiometer (Maxim DS1804). The maximum transmission range is about 300 feet in open space.

Like other types of motes in the family, MICA motes support a 51-pin I/O extension connector. Sensors, actuators, serial I/O boards, or parallel I/O boards can be attached via this connector. A sensor/actuator board can host a temperature sensor, a light sensor, an accelerometer, a magnetometer, a microphone, and a beeper. The serial I/O (UART) connection allows the mote to communicate with a PC in real time. The parallel connection is primarily for downloading programs to the mote.

It is instructive to look at the energy consumption of the various components on a MICA mote. Radio transmission consumes the most power. However, each radio packet (e.g., 30 bytes) takes only 4 ms to send, whereas listening for incoming packets keeps the radio receiver on all the time. The energy required to send one packet can power the receiver for only about 27 ms. Another observation is that there are huge differences among the power consumption levels of the MCU's active, idle, and suspend modes. From an energy-saving point of view, it is thus worthwhile to suspend the MCU and the RF receiver for as long as possible.
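The trade-off above can be made concrete with a back-of-the-envelope calculation. The sketch below uses only the two timing figures quoted in the text (4 ms to transmit a packet, about 27 ms of equivalent receiver-on time); the derived power ratio is illustrative, not a datasheet value.

```python
# If the energy of one packet transmission (E = P_tx * t_tx) equals the
# energy of ~27 ms of listening (E = P_rx * t_rx), then the power ratio
# P_tx / P_rx is simply t_rx / t_tx.

TX_TIME_S = 0.004        # ~4 ms to send one ~30-byte packet
RX_EQUIV_TIME_S = 0.027  # the same energy keeps the receiver on ~27 ms

def tx_rx_power_ratio(tx_time_s: float, rx_equiv_time_s: float) -> float:
    """Ratio of transmit power to receive power implied by equal energy."""
    return rx_equiv_time_s / tx_time_s

ratio = tx_rx_power_ratio(TX_TIME_S, RX_EQUIV_TIME_S)
print(f"Transmit power is roughly {ratio:.2f}x receive power")  # ~6.75x
```

The numbers make the key point: over time, idle listening rather than transmission dominates the energy budget, which is why duty-cycling the receiver matters so much.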

Source of Information: Elsevier Wireless Networking Complete 2010
Posted in Wireless

Wednesday, 15 August 2012

Sensor Node Hardware

Posted on 08:26 by Unknown
Sensor node hardware can be grouped into three categories, each of which entails a different set of trade-offs in the design choices.

● Augmented general-purpose computers: Examples include low-power PCs, embedded PCs (e.g., PC104), custom-designed PCs (e.g., Sensoria WINS NG nodes), and various personal digital assistants (PDAs). These nodes typically run off-the-shelf (OTS) operating systems such as Windows CE, Linux, or real-time operating systems, and use standard wireless communication protocols such as Bluetooth or IEEE 802.11. Because of their relatively high processing capability, they can accommodate a wide variety of sensors, ranging from simple microphones to more sophisticated video cameras. Compared with dedicated sensor nodes, PC-like platforms are more power hungry. However, when power is not an issue, these platforms have the advantage that they can leverage the availability of fully supported networking protocols, popular programming languages, middleware, and other OTS software.

● Dedicated embedded sensor nodes: Examples include the Berkeley mote family, the UCLA Medusa family, Ember nodes, and the MIT μAMPS. These platforms typically use commercial OTS (COTS) chip sets, with emphasis on small form factor, low-power processing and communication, and simple sensor interfaces. Because of their COTS CPUs, these platforms typically support at least one programming language, such as C. However, in order to keep the program footprint small enough for their limited memory, programmers of these platforms are given full access to hardware but barely any operating system support. A classical example is the TinyOS platform and its companion programming language, nesC.

● System-on-chip (SoC) nodes: Examples of SoC hardware include smart dust, the BWRC PicoRadio node, and the PASTA node. Designers of these platforms try to push the hardware limits by fundamentally rethinking the hardware architecture trade-offs for a sensor node at the chip design level. The goal is to find new ways of integrating CMOS, MEMS, and RF technologies to build extremely low-power, small-footprint sensor nodes that still provide certain sensing, computation, and communication capabilities. Since most of these platforms are currently in the research pipeline with no predefined instruction set, there is no software platform support available.

Source of Information: Elsevier Wireless Networking Complete 2010

Posted in Wireless

Saturday, 11 August 2012

Sensor Network Platforms and Tools

Posted on 08:20 by Unknown
We discussed various aspects of sensor networks, including sensing and estimation, networking, infrastructure services, sensor tasking, and data storage and query. A real-world sensor network application most likely has to incorporate all these elements, subject to energy, bandwidth, computation, storage, and real-time constraints. This makes sensor network application development quite different from traditional distributed system development or database programming. With ad hoc deployment and frequently changing network topology, a sensor network application can hardly assume an always-on infrastructure that provides reliable services such as optimal routing, global directories, or service discovery.

There are two types of programming for sensor networks: that carried out by end users and that performed by application developers. An end user may view a sensor network as a pool of data and interact with the network via queries. Just as with query languages for database systems such as SQL, a good sensor network programming language should be expressive enough to encode application logic at a high level of abstraction, and at the same time structured enough to allow efficient execution on the distributed platform. Ideally, end users should be shielded from the details of how sensors are organized and how nodes communicate.
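To make the end-user query view concrete, the sketch below composes a declarative query in the style of TinyDB's SQL-like dialect for motes. The table name `sensors`, the field names, and the exact syntax should be treated as illustrative assumptions rather than a definitive grammar.

```python
# Compose a TinyDB-style declarative sensor query. The network, not the
# user, decides how to route and execute it; the user only states what
# data is wanted, under what condition, and how often.

def build_query(fields, predicate, period_ms):
    """Return an SQL-like query over the virtual table 'sensors', which
    conceptually holds one row per node per sample epoch."""
    return (f"SELECT {', '.join(fields)} FROM sensors "
            f"WHERE {predicate} SAMPLE PERIOD {period_ms}")

q = build_query(["nodeid", "temperature"], "temperature > 30", 1024)
print(q)
# SELECT nodeid, temperature FROM sensors WHERE temperature > 30 SAMPLE PERIOD 1024
```

Note how nothing in the query mentions routing, radio schedules, or node organization; that separation is exactly the shielding the text calls for.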

On the other hand, an application developer must provide end users of a sensor network with the capabilities of data acquisition, processing, and storage. Unlike general distributed or database systems, collaborative signal and information processing (CSIP) software comprises reactive, concurrent, distributed programs running on ad hoc, resource-constrained, unreliable computation and communication platforms. Developers at this level have to deal with all kinds of uncertainty in the real world. For example, signals are noisy, events can happen at the same time, communication and computation take time, communications may be unreliable, battery life is limited, and so on. Moreover, because of the amount of domain knowledge required, application developers are typically signal and information processing specialists, rather than operating systems and networking experts. How to provide appropriate programming abstractions to these application writers is a key challenge for sensor network software development. In this chapter, we focus on software design issues to support this type of programming.

To make our discussion of these software issues concrete, we first give an overview of a few representative sensor node hardware platforms. We then present the challenges of sensor network programming that stem from the massively concurrent interaction with the physical world, and describe TinyOS for Berkeley motes together with two types of node-centric programming interfaces: an imperative language, nesC, and a dataflow-style language, TinyGALS. Node-centric designs are typically supported by node-level simulators such as ns-2 and TOSSIM. State-centric programming is a step toward programming beyond individual nodes. It gives programmers platform support for thinking in high-level abstractions, such as the state of the phenomena of interest over space and time.

Source of Information: Elsevier Wireless Networking Complete 2010
Posted in Wireless

Tuesday, 7 August 2012

Ad Hoc Wireless Sensor Networks

Posted on 08:08 by Unknown
Advances in microelectronics technology have made it possible to build inexpensive, low-power, miniature sensing devices. Equipped with a microprocessor, memory, radio, and battery, such devices can now combine the functions of sensing, computing, and wireless communication into miniature smart sensor nodes, also called motes. Since their on-board radio and battery mean that smart sensors need not be tethered to any infrastructure, their main utility lies in being ad hoc, in the sense that they can be rapidly deployed by randomly strewing them over a region of interest. Several applications of such wireless sensor networks have been proposed, and there have also been several experimental deployments. Example applications are:

● Ecological Monitoring: wildlife in conservation areas, remote lakes, forest fires.

● Monitoring of Large Structures: bridges, buildings, ships, and large machinery, such as turbines.

● Industrial Measurement and Control: measurement of various environment and process parameters in very large factories, such as continuous process chemical plants.

● Navigation Assistance: guidance through the geographical area where the sensor network is deployed.

● Defense Applications: monitoring of intrusion into remote border areas; detection, identification, and tracking of intruding personnel or vehicles.

The ad hoc nature of these wireless sensor networks means that the devices and the wireless links will not be laid out to achieve a planned topology. During operation, sensors may be difficult or even impossible to access, and hence their network needs to operate autonomously. Moreover, over time sensors may fail (one reason being battery drain) and cannot be replaced. It is therefore essential that sensors learn about each other and organize into a network on their own. Another crucial requirement is that, since sensors may often be deployed randomly (e.g., simply strewn from an aircraft), the devices need to determine their locations in order to be useful. In the absence of centralized control, this whole process of self-organization needs to be carried out in a distributed fashion. In a sensor network, there is usually a single, global objective to be achieved. For example, in a surveillance application, a sensor network may be deployed to detect intruders; the global objective here is intrusion detection. This can be contrasted with multihop wireless mesh networks, where we have a collection of source–destination pairs, each interested in optimizing its individual performance metric. Another characteristic feature of sensor networks appears in the packet scheduling algorithms used. Sensor nodes are battery-powered and the batteries cannot be replaced; hence, energy-aware packet scheduling is of crucial importance.

A smart sensor may have only modest computing power, but the ability to communicate allows a group of sensors to collaborate to execute tasks more complex than just sensing and forwarding the information, as in traditional sensor arrays. Hence, they may be involved in online processing of sensed data in a distributed fashion so as to yield partial or even complete results to an observer, thereby facilitating control applications, interactive computing, and querying. A distributed computing approach will also be energy-efficient compared with mere data dissemination, since it avoids the energy consumed in long-haul transport of the measurements to the observer; this is of particular importance since sensors could be used in large numbers due to their low cost, yielding very high resolutions and large volumes of sensed data. Further, by arranging computations among only the neighboring sensors, the number of transmissions is reduced, thereby saving transmission energy. A simple class of distributed computing algorithms would require each sensor to periodically exchange the results of local computation with the neighboring sensors. Thus the design of distributed signal processing and computation algorithms, and the mapping of these algorithms onto a network, is an important aspect of sensor network design.
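As an illustration of the "exchange with neighbors" class of algorithms, the sketch below simulates a tiny network in which every node repeatedly averages its estimate with those of its neighbors. The topology, the readings, and the round count are made-up values for demonstration, not part of the text.

```python
# Distributed neighbor averaging: each node repeatedly replaces its local
# estimate with the mean of its own value and its neighbors' values, so
# the network converges toward a common (consensus) estimate using only
# one-hop exchanges and no central coordinator.

def neighbor_average(values, neighbors, rounds):
    """values: {node: reading}; neighbors: {node: list of adjacent nodes}."""
    for _ in range(rounds):
        values = {
            n: (values[n] + sum(values[m] for m in neighbors[n]))
               / (1 + len(neighbors[n]))
            for n in values
        }
    return values

# A 4-node line network: 0 -- 1 -- 2 -- 3
readings = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}
links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
estimates = neighbor_average(readings, links, rounds=50)
# After enough rounds, every node holds essentially the same value
# (25.0 for this symmetric example) without any long-haul transmissions.
```

In a general topology this scheme converges to a degree-weighted average rather than the exact mean, but the energy argument is the same: every transmission is a short, one-hop exchange.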

Design and analysis of sensor networks must take into account the native capabilities of the nodes, as well as architectural features of the network. We assume that the sensor nodes are not mobile. Further, nodes are not equipped with position-sensing technology, such as the Global Positioning System (GPS). However, each node can set its transmit power at an appropriate level; that is, each node can exercise power control. Further, each node has an associated sensing radius: events occurring within a circle of this radius centered at the sensor can be detected.

In general, a sensor network can have multiple sinks, where the traffic generated by the sensor sources leaves the network. We consider networks in which only a single sink is present. Further, we will be concerned with situations in which sensors are randomly deployed. In many scenarios of practical interest, preplanned placement of sensors is infeasible, leaving random deployment as the only practical alternative; e.g., consider a large terrain that is to be populated with sensors for surveillance purposes. In addition, random deployment is a convenient assumption for analytical tractability in models. Our study will also assume a simple path loss model, with no shadowing and no fading in the environment.
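A minimal sketch of the modeling assumptions above (power control plus a shadowing- and fading-free path loss model) follows. The log-distance form, the reference loss of 40 dB at 1 m, the exponent of 3, and the -90 dBm receiver threshold are all assumed example parameters, not values from the text.

```python
import math

# Simple log-distance path loss with no shadowing or fading:
#   PL(d) = PL(d0) + 10 * eta * log10(d / d0)   [dB]
# Received power is then transmit power minus path loss.

def received_power_dbm(tx_power_dbm, d_m, d0_m=1.0, pl_d0_db=40.0, eta=3.0):
    path_loss_db = pl_d0_db + 10.0 * eta * math.log10(d_m / d0_m)
    return tx_power_dbm - path_loss_db

def min_tx_power_dbm(d_m, rx_threshold_dbm=-90.0, **model):
    """Power control: the lowest transmit power (dBm) that still delivers
    the receiver threshold at range d_m under the model above."""
    path_loss_db = -received_power_dbm(0.0, d_m, **model)
    return rx_threshold_dbm + path_loss_db

print(received_power_dbm(0.0, 10.0))  # 0 dBm sent over 10 m -> -70.0 dBm
print(min_tx_power_dbm(10.0))         # -20.0 dBm suffices at 10 m
```

The second function is the essence of power control: rather than always transmitting at maximum power, a node picks the smallest power that still closes the link at the intended range.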

Source of Information: Elsevier Wireless Networking Complete 2010
Posted in Wireless

Friday, 3 August 2012

MOBILE WEB CONSIDERATIONS

Posted on 08:48 by Unknown
What makes a good mobile website? This is an impossible question to answer, because design and taste are always highly subjective matters. But certain considerations are worth bearing in mind from the start, and these considerations will undoubtedly help you create positive user experiences for your mobile users.



Recognizing Mobile Users
It should go without saying that the most important aspect of developing a mobile website is to ensure that it is available and easy to reach! This sounds straightforward, of course, but it can actually become relatively involved: It's a fair assumption that existing site owners are very careful to promote and use their current website URL consistently. If you want to create a separate site for your mobile users, should it be at a different URL? Should it appear at the same URL? If so, how does the server or CMS know whether to present one type of site or the other? How can you cater to user choice and potentially let users switch back and forth between your desktop and mobile sites? How can you publicize the (attractive) fact that the mobile site exists at all? And how can you ensure that it is correctly listed in search engines and directories?

There are glib answers to all these questions, but each has a level of subtlety to it, and no matter which technique you use for hosting, selecting, and publicizing your mobile presence, it is inevitable that you will have to distinguish between mobile users and desktop users. In practice, this means detecting whether a browser is mobile or desktop and then providing different sites or templates accordingly. Users find content in the strangest ways, and it remains the site owner's responsibility to ensure that the right type of experience is given to each type of user. You will look at a number of techniques for doing this, both in the general sense and for specific content management systems.
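A deliberately naive sketch of such detection is shown below: it sniffs the HTTP User-Agent header for common mobile keywords and picks a template. The keyword list and template names are assumptions for illustration; production sites typically rely on a maintained device database (such as WURFL) or on the detection built into their CMS rather than a hand-rolled list.

```python
# Naive User-Agent sniffing: choose a mobile or desktop template based on
# keywords in the User-Agent header. Illustrative only -- real device
# detection needs a maintained database, not a short hard-coded list.

MOBILE_HINTS = ("mobile", "android", "iphone", "ipod", "blackberry",
                "windows phone", "opera mini")

def is_mobile(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

def choose_template(user_agent: str) -> str:
    return "mobile.html" if is_mobile(user_agent) else "desktop.html"

ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) Mobile/9A334"
print(choose_template(ua))                          # mobile.html
print(choose_template("Mozilla/5.0 (Windows NT)"))  # desktop.html
```

Whichever detection method is used, it is also good practice to offer users an explicit link to switch between the mobile and desktop versions, since no detection is perfect.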



Thematic Consistency
A web standards body, the W3C, uses the term thematic consistency. This is not, as you may think, related to themes or the cosmetics of a site, but to the fact that, according to the body's "One Web" philosophy, the whole Web should be accessible from any device — so, given a specific URL, any browser should receive the same content.

This is not to say that the same content should look the same (because the theming of a mobile web page can often be very different from that of its equivalent desktop page), nor even that users on different devices want to see the same content (because they are quite possibly in a different context, looking for possibly very different things).

But the One Web philosophy is valuable and important, and indeed URLs should always be used in a way that honors the Uniform adjective of the acronym. It would be counterproductive for the whole mobile web movement if it were to become a disconnected ghetto of content targeted at one type of device alone that did not share links, resources, and content with the vast existing Web. When you are building your mobile website, think carefully about how its information architecture is sympathetic to this: The same posts, pages, articles, products, and so forth should be easily and consistently accessible from all types of browsers, even if their appearance and surrounding user interface may be radically different.



Brand Consistency
It is also important to ensure that your own website's brand is preserved between its mobile and desktop versions. There should be consistency between the theming, color schemes, and look and feel of the different types of sites. If your desktop site is light and airy and your mobile site is dark and cluttered, you will confuse your users, many of whom may remember what your desktop site looks like and may find it hard to correlate that with the mobile site, damaging their trust in your brand or site.

The same goes for your logo, color scheme, feature images, graphical elements, and so on; within reason, you should endeavor to reuse as much as possible between the two sites. Users need to feel that they are interacting with the same brand while being given an entirely optimized, mobile-centric experience.

Similarly, if your desktop site is renowned for a simple, cheerful, and highly efficient user interface and experience, your mobile users will expect the same of the mobile site. Indeed, due to its constraints, a mobile website obviously needs to have even more attention paid to its usability!



A Dedication to Usability
With limited real estate (both physically and in terms of pixels) and often very different input techniques — not to mention the fact that users may be in a more time-sensitive context, and on a slower network connection — a mobile device needs to be presented with a site interface that is as efficient to use as possible. At the very least, carefully consider any use of excessive forms, multi-page wizards for completing common or simple tasks, or complex menus standing between users and critical functionality. These are not likely to be appreciated by mobile users.

Instead, think hard about what mobile users want to do, and ensure that those critical tasks are as heavily optimized as possible in the mobile version of the interface. Arguably this was one of the big successes of the "native app" phenomenon: Although many apps were little more than views of a company's existing web content, the app paradigm allowed interface designers to think entirely afresh about how that content could be accessed. The popular pattern of a toolbar at the bottom of an app's screen, with four or five important tasks reachable with a thumb, seems a long way from the lengthy and complex menu bar across the top of a website, but it shows that the same information architecture and fundamental functionality can always be accessed using different user-interface techniques. Think hard about which techniques work best for the new medium and the types of devices you are targeting.



Remember Mobile
Finally, remember the point about the mobile device being so much more than merely a browser on a small screen. Yes, it's a phone, an address book, a game console, and so on, but it's also a device that is in the user's hand nearly every hour of the day, a device that brings unique capabilities and possibilities for you to design for.

Never forget that mobile is an adjective, not a noun. The important point about the mobile web is not that the user is holding a mobile phone, but that she is mobile. Make the most of the fact that the visitors to your website don't just have a small screen; rather, they are out and about in the real world, living their lives, staying connected — and they want to access everything you have to offer, whenever they want it, in a wonderful mobile way.

Source of Information: Wiley - Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Tuesday, 31 July 2012

REVISITING ASSUMPTIONS

Posted on 08:44 by Unknown
Before embarking upon a discussion of the practicalities of developing mobile websites, let's think about some of the opportunities that the mobile web brings and how it should encourage you to revisit many of the assumptions you may have made about today's desktop web.

New places: Whether it's on a train, waiting in line at a bus stop or an airport, walking down a street, working in the fields, lounging on the beach, or snatching glances while driving a car, humans now have the opportunity to access websites from a whole host of new locations — places where it is impractical or impossible to use a desktop or laptop computer. The desktop web gets used from home, the office, and possibly cafés and kiosks, but the places and situations where users can access the mobile web are innumerable. Think how you can adapt your services and content to cater to people visiting your site from these novel contexts; with the rise of geolocation capabilities in some modern browsers, you can even start to key your content off them.


New users: The mobile web creates the opportunity to place web content into the hands of entirely new users. It is easy to think that everyone has regular access to a computer connected to the Internet, and in some developed markets and for certain demographic groups, that's true. But the availability of fixed Internet access is already dwarfed by that via mobile devices. The International Telecommunications Union (ITU) estimates that there are 13.6 mobile 3G subscriptions for every 100 people (compared to 8 fixed broadband connections). But even that is just the start: Only 1 billion of the world's 5.3 billion mobile subscribers have 3G connections. If that proportion grows rapidly over the coming years, there will be literally billions of new mobile web users around the world. Suffice it to say that this sheer volume can be a huge opportunity for site owners to capitalize on.


New marketing, new business models: The mobile web provides a new way to reach potential and existing customers. If you run an online business, or an offline one that relies on online marketing and promotion, this can significantly expand the possibilities for you to grow and develop your business. Through localized and targeted mobile advertising, you can reach users who are perhaps more in need of your services than ever (a web search for "plumber" on a mobile device might imply that the user is in more urgent need of service than one from a high-and-dry desktop browser!), and location-based social networks providing check-in functionality (such as Facebook, foursquare, and the like) look set to offer exciting new ways to promote and market certain types of businesses. But the mobile medium itself provides the opportunity for entirely new fulfillment and business models. From phone-based voucher techniques, to games with in-app purchasing, to near-field-communication-based commerce, the mobile device offers new ways to interact with customers and create business opportunities.


New types of relationships: Often overlooked is the fact that the mobile web is a medium viewed through the screen of what many consider to be a highly personal piece of consumer electronics. Carried all the time and normally held close to the body, the mobile phone is more to many users than simply another gadget: It is their primary link with their friends, their families, and their online lives. A computer rarely engenders as much love and care as a mobile phone, and many believe that this can be an important facet to consider when developing mobile web services. Bland, impersonal web pages might jar with a user's perception of what his mobile device represents; he may expect the mobile web to be more immersive, more customized, more personal, and more social. As a site owner, you need to consider how your online presence can capitalize on this more emotional relationship between the user and the medium.


One final point is arguably more important than all of these, and it's one that sows the seeds for you to be able to really explore the possibility of the mobile web: The mobile phone is so much more than simply a piece of hardware upon which a lonely browser runs. Today's mobile devices are truly the Swiss Army knives of modern society: a phone, an address book, a calendar, a camera, a music player, a compass, a messaging terminal, a game console, and now a web client.

Even if it simply results in ensuring that your business website has a click-to-call link with your telephone number (so the user can dial you straight from the page), keeping this fact in mind is an important step in crafting the shape of this new medium. Using geolocation; allowing social media interactions with users' friends and contacts; uploading photos directly from a camera; building web applications that respond to phone orientation, temperature, or light levels — the list of ways in which a mobile device could be a more capable web client than a desktop one is almost endless. It's true that this is still an area of much standardization work (privacy and security are important considerations, of course). But what is truly exciting about the potential of the mobile web is that you have barely glimpsed the possibilities gained by aligning web-based services with the diverse capabilities of these amazing little devices.

Source of Information: Wiley - Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Friday, 27 July 2012

A NEW MEDIUM

Posted on 08:38 by Unknown
So what is this mobile web, and why is it something so different that it deserves whole books dedicated to it? Well, on one hand, it is nothing dramatic at all. The fundamental idea is still that you have a browser, a server, some standardized protocols and file formats passing between them, and a user who can view and traverse through content provided by site owners.

And you've now reached a point where, more or less, those protocols and files are written, produced, and interpreted in the same way on a desktop or laptop computer as they are on a mobile device. For markup, most mobile devices accept and handle some sort of XHTML or HTML5; for graphics, they can display PNG, GIF, or JPEG files with high color depth; for styling, at least simple forms of CSS should be understood and interpreted in some way; and, on contemporary devices, JavaScript is feasible for adding interactivity to a mobile website.

So far, so familiar. In terms of technology, you are more or less on familiar ground. You should be careful of one or two things: Flash and Silverlight, for example, are not recommended for widespread use on mobile handsets, because there are major swathes of devices that do not support either, so they should be used selectively at most.

But despite the fact that they build on the same standards, you do need to treat mobile browsers significantly differently from desktop ones. Some of the reasons for this are still technical in nature. A mobile network is not the same as a landline Internet connection, for example. There are considerations of throughput and latency — not to mention a possible cost to the subscriber — when a mobile device is accessing a website over a cellular network. Sensibly, a mobile website should be extremely considerate toward the requirements it makes on the network; large, unwieldy pages that are slow to display and render are clearly not well suited to the challenge.

Also, despite huge advances in processor power and graphics acceleration, most mobile browsers are running on hardware that is well below the specification of an average computer. Sites that put undue load on the CPU or even GPU of a mobile device are likely to be more sluggish than the same site on a desktop device. And even if the handset can provide a decent user experience for such a page, it probably comes at the expense of temperature or battery usage, something that is still at a premium in most handheld devices.

Finally, of course, a mobile device has a different form factor and size from a desktop computer. It certainly has a smaller screen, probably with a different default orientation; it may lack a physical keyboard; and it almost certainly lacks a mouse. Any website that makes desktop-based assumptions about a particular type of input device or screen size undoubtedly provides a suboptimal experience on a mobile device. For these reasons alone, it is worth thinking about the mobile web as a different medium than the desktop-centric Web that we all use.

But that's not the whole story. Consider cinema and television, for example. There are certainly similarities between them: Both present visual information on screens, people sit and view them, and both can, in theory, display the same material. But there is still no doubt that the two are treated as distinct media — even spawning entirely separate industries. Is there more to that distinction than simply the fact that the two have different-sized screens?

Yes, of course. And the differences are context and user expectation. A cinema-goer is in a distinct state of mind: possibly out for the evening with friends or family, prepared to pay money for a very particular piece of visual entertainment, and amenable to being presented with a solid period of rich, visual storytelling — the movie he's selected. The television occasionally gets used in this way, of course, but it also serves different sorts of expectations from its users: turning it on quickly to catch the news, short episodic programming, children's ad-hoc entertainment, or even just background noise. The way humans want to interact with a technology determines how content gets created for it.

So it is with the mobile web. Yes, many mobile devices can render the same websites as those designed for a desktop screen, but apart from the technical limitations of doing so, the ways in which the two technologies actually get used can also be very different. A desktop user is sedentary, probably relatively comfortable, and quite probably using the Web for a lengthy session, either professionally or for leisure. A mobile user, snatching time to use her handheld browser, perhaps on the move, is more likely to have a shorter browsing session, has a focused goal in mind, and is far less likely to surf aimlessly about. The mobile user can easily be in a distinctly different state of mind, bringing an entirely different set of expectations to her web browsing experience.

Of course, there will be individual websites where exactly the same content, and exactly the same design, can be presented to multiple types of devices and users in different contexts. A site that comprises merely a simple collection of plain text documents, for example, probably doesn't need to change significantly between mobile and desktop consumption (assuming the layout and typography adapt to different physical constraints).

But few sites on today's Web are as static and immutable as that. Thanks to the prevalence of content management systems, even the simplest personal website is database-driven, dynamically themed, administered online, and open to reader feedback. Not only is it valuable to think about how different types of readers might want to consume the information on such a site, but it's also extremely easy to implement solutions that account for the types of browsers they use: reformatting the page, promoting different sections of the site, resizing graphics for different screens, and so on.

From a site owner's point of view, this is exciting: the mobile web is a distinct enough new medium to consider as a priority when designing and building a site, so it's arguably a revolution. But from a practical point of view on the server, its implementation is merely an evolution: you can use today's tools, today's plug-ins, and your existing experience, and you can take the content and functionality you currently provide to a homogeneous desktop user base and potentially get it into the hands of billions of mobile users.

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Tuesday, 24 July 2012

Dawn of the Modern Mobile Web

Posted on 08:33 by Unknown
The years 2006 and 2007 were seminal in the development of the mobile web. For several years, high-end mobile devices in Europe, Asia, and the United States had been gaining relatively high-resolution color screens and increasingly powerful processors. Together with a widespread rollout of third-generation (3G) network connectivity, sometimes with flat rates for data usage, this meant that many of the constraints of older devices and networks were removed, and there was a decreasing need to rely on "lite" pastiches of the Web, such as WAP and i-mode, to deliver information to handsets. Finally, there was a possibility that much of the regular Web could be browsed, cost-effectively, on high-end mobile devices.

A presage of this change was Nokia's often overlooked decision to develop a port of the WebKit web browser to its Symbian operating system in 2005. (WebKit, the underlying engine of Apple's recently released Safari browser, had been open-sourced by Apple that year.)

Nokia's first handsets to carry the resulting S60 Browser were extremely successful, if not entirely because of the browser alone. The fact that most of these handsets supported WiFi (for fast, free network connectivity) and that even the richest web content could be browsed quite capably (with the help of a full-screen zoom-in/out feature) meant that many developers saw a future in which the mobile device would become a viable first-class citizen of the Web, rather than one crippled by slow bandwidth and prohibitive Internet access costs.

Any lingering doubts that full mobile web access was just an esoteric dream were shattered in 2007, when Apple, a new entrant to the mobile handset business, launched its iPhone. Promoted as a combination of phone, music player, and Internet communicator, a large part of the iPhone's attractiveness to consumers was its ability to render desktop websites with high fidelity, and to pan and zoom through them elegantly using a multi-touch screen. The handset came bundled with unlimited data plans from most of its launch carriers.

When first launched, the iPhone did not allow third-party applications to run on it, and Apple encouraged those who wanted to create services for the device to do so through the web browser. Although the browser could display full web pages, developers quickly realized that users responded well to simple, efficient user interfaces that mimicked the device's built-in applications. Apple published guidelines for creating websites that adhered to iPhone user interface conventions but used fully fledged web standards like HTML and CSS. As a result, thousands of web developers started creating iPhone-ready sites targeted at these mobile users.

A wholehearted adoption of web technologies for mobile applications stalled somewhat with the release of v2 of the iPhone operating system. With this release came the ability for developers to create native applications for the platform, together with rich access to device APIs (such as motion sensors, camera access, and geolocation) that the web browser did not provide. In the ensuing "gold rush," thousands of developers flocked to developing these native applications, games in particular, which also held the opportunity for monetization through Apple's newly launched App Store. Google's Android platform was also launched in 2008, and while it sported a very capable web browser, it too encouraged developers to write native, rather than web-based, applications.

At the time of this writing, however, the mobile web is back in the ascendancy. The irony is that the native application concept has arguably been a victim of its own success: users of many different handset platforms now expect a native app experience, but the proliferation of high-end mobile operating systems means that the cost and effort involved in developing such apps are rapidly rising.

Developing web applications, on the other hand, offers the tempting opportunity to "develop once, run multiply." Diversity between handset types and browsers means that there is still a strong need to create sites and designs that adapt to different browser platforms, screen sizes, and so on, but at least there is a chance to address a wide range of handsets, from the most capable to the least capable web-enabled device, with a common set of technologies and a single development and deployment approach. Add to this the speed with which mobile browser technology is evolving, with which APIs are becoming richer, and with which the underlying standards are being developed, and it is no surprise that the Web increasingly looks set to be the dominant content delivery platform for the mobile generation.

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Saturday, 21 July 2012

Wireless Access Protocol

Posted on 08:30 by Unknown
The WAP Forum, formed in 1997, was a standards body dedicated to bringing web-like access to simple handsets across low-bandwidth mobile networks (such as GSM and GPRS). The WAP standards it produced, first as a reference v1.0 in 1998 and then as a deployable v1.1 in 1999, defined a whole stack of protocols to help deliver content efficiently across these networks.

Central to the WAP architecture was the WAP gateway, which, like the UP.Link gateway, was responsible for taking content available on web servers hosted on the Internet and essentially compiling it into an efficient bytecode format that the browsers on the handset could handle and render. Because of this compilation process, content could not be written in arbitrary HTML; it had to be created in strict, well-formed WML (Wireless Markup Language).

WML was an XML-based language and was similar to HDML in that it relied on a card-based paradigm (as shown previously) and shared very few tags with HTML. Web developers who wanted to create sites for WAP handsets needed to craft entirely different markup and interfaces, even when the underlying content was shared with the regular web version of the site. (And unfortunately, the intolerance of many WAP gateways meant that web developers had to emit absolutely perfect XML syntax or risk cryptic errors on their users' screens.)

The earliest WAP devices included the iconic Nokia 7110 and the Ericsson R320, both released in 1999 and providing monochromatic access to simple WAP content. Both adhered well to the specifications, supporting simple images in cards, for example, and many pioneering developers created sites for the devices. Nevertheless, the early hyperbole surrounding the potential of WAP failed to meet users' expectations: they were unable to "surf the Internet" on their mobile devices as they expected, finding that only those few sites that had crafted WML-based versions rendered on their screens.

Further, the increasing number of devices that shipped with WAP browsers over the following years brought a huge problem of diversity for site owners. Each browser could interpret certain sections of the WAP specifications differently, and the inconsistencies between implementations were frustrating for a web community that, at the time, was used to the ease of developing for a single dominant browser in the desktop environment.

For these and many other reasons, WAP failed to gain the momentum that had been expected, and it did not become the worldwide mobile web platform that many had hoped for. Network carriers, both worried about the unreliability of mobile sites on the Internet as a whole and keen to monetize data usage across their networks, often blocked mobile users from accessing arbitrary web addresses from their phones, preferring "walled gardens" of content from preferred partners, which often ended up as little more than directories of ringtones, desktop backgrounds, games, and other downloads.

WML underwent a number of revisions before the WAP Forum (which became part of a larger standards body, the Open Mobile Alliance) specified that WAP v2.0 should use a mobile subset of XHTML as its markup language. With that came the end of web developers' need to author pages in an entirely unfamiliar markup, and the start of a standards convergence between the modern desktop web (which was gradually, although not universally, adopting XHTML) and the mobile web of the future.

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Wednesday, 18 July 2012

i - mode in Japan

Posted on 08:24 by Unknown
In February 1999, the Japanese network carrier NTT DoCoMo launched a service called "i-mode" that gave mobile subscribers access to simple web content on their handsets. Rather than requiring a new markup language like HDML or WML, i-mode browsers could render pages written in C-HTML, which was simply a subset of the HTML v3.2 language common at the time. Although publishers were encouraged to build special C-HTML sites specifically for i-mode use, they could reuse their existing HTML knowledge and tools, which meant a much smaller barrier to getting sites online. That factor resulted in a huge number of publishers doing so.

Many things contributed to i-mode (and similar rival offerings from other carriers) becoming hugely popular in Japan. One was the reliability and consistency of the browsers and the networks; another was the way in which DoCoMo provided billing mechanisms that allowed site owners to take payments from users for various commercial services. Some also suggest that the relative lack of PC-based web access in Japan at the time drove i-mode to success: for many consumers, their mobile device was the easiest and quickest way to access web content at all, so i-mode adoption grew phenomenally, rising to 40 million users within a mere four years of its launch.

Whatever the reasons, i-mode and other Japanese mobile web platforms were held in high esteem by the mobile industry elsewhere in the world. Very quickly, their ubiquitous use throughout Japan became a blueprint for what a successful mobile web might look like, and several European and Asian carriers endeavored to replicate that success by using exactly the same technologies in their own networks several years later. (Notably, most of these attempts were unsuccessful, suggesting that the i-mode technology itself was not the main factor in the Japanese networks' success.)

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Sunday, 15 July 2012

Early Mobile Web Technologies

Posted on 08:22 by Unknown
One of the first companies to pioneer the concept of pull-based information services on a mobile device was Unwired Planet, based in California. Launched in 1996, the company's UP.Link system comprised a software browser (UP.Browser) that ran on PDAs and mobile handsets, and a network-side gateway that helped the browser fetch and format sites written in the company's proprietary markup language, HDML.

HDML was a card-based system for structuring content, and it bore little resemblance to HTML, even in its simplest form. The basic principle was that the browser would retrieve a "deck" of such cards, allowing a user to look at a selection of related pages of information without the browser having to make additional requests to the server. The cards supported textual and basic image content, allowed users to navigate through decks with simple links and soft-key anchors, and could even initiate telephone calls.

In the U.S., AT&T ran a packet-switched data network called PocketNet, which was, at the time, one of the first consumer offerings to provide web-like access on a mobile device. This service encouraged many early website owners to experiment with developing HDML-based sites for this niche U.S. market.

In 1997, Unwired Planet attempted, and failed, to get HDML adopted as a markup standard by the W3C, which would have been an important step in getting the technology widely used outside the United States. In that same year, however, Unwired Planet joined with Nokia and Ericsson (which had been developing web-like markup languages of their own) to form the WAP Forum, the standards body that would go on to specify WAP and related standards. Much of the early structure of the resulting WML markup language came from HDML's syntax and concepts, and Unwired Planet adapted its infrastructure and browsers to support WAP, becoming a major worldwide vendor of browser and gateway products as a result.

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Wednesday, 11 July 2012

THE INEVITABILITY OF A MOBILE WEB

Posted on 08:19 by Unknown
Your grandparents would probably recognize it as an archetypal scene from a science fiction book: your protagonist, somewhere in the universe, pulls out a small handheld device, taps on it, and speaks. On the other side of the planet or spaceship upon which the action takes place, others receive the call, listen to the message, and begin to converse. It was not very long ago that wireless communication was the ultimate in futuristic high technology. As recently as 30 years ago, most people's use of telephones was relatively rare, costly, and short-distance. More importantly, it was utterly constrained by copper; you couldn't make a call unless you were within a few meters of the handset. Only 15 years before that, most national and all international calls required an operator to patch calls through huge switchboards, cables and all.

In the late 1980s and 1990s, this started to change dramatically. Developments in radio and cellular technologies, coupled with the miniaturization and falling cost of computing hardware, enabled new possibilities: networks in which people could carry their telephone devices with them (or barely carry them, in the case of some of the early suitcase-sized models!) and, assuming they had sufficient radio coverage, place and receive calls while on the move.

Initially relying on analog technologies, and then through the creation and standardization of subsequent generations of digital technologies, these devices rapidly grew in number and fell in cost. At the same time, the cellular networks required to connect them grew in size, coverage, and interconnectedness. The cell phone became commonplace, even ubiquitous, and before you knew it, the constraints placed on where and when you could talk to friends and colleagues over the phone had been lifted.

With these small devices equipped with miniature keyboards and screens, it was not long before other ways of using them started to emerge. The digital technologies used to transmit and receive voice were also perfectly capable of carrying small amounts of data. Almost unintentionally, the GSM standard, for example, allowed users to send and receive short messages of up to 160 characters with their devices. By 2000, billions of such messages were being sent worldwide. Clearly the mobile device had the potential to be more than just a way to talk to others: it could send and receive data from other handsets or, indeed, central services.

The 1990s also saw the birth of the Web: a way in which computers could connect to the vast, interconnected Internet and access pages of information residing on servers elsewhere, worldwide. Again, this was an evolution from more primitive and simple technologies, but the Web burgeoned, thanks to factors such as the ease with which users could navigate through content with browsers, the array of tools that made it easy for authors to create and publish content, and, again, the decreasing cost and increasing power of computing hardware.

Buoyed by the dream of having the world's knowledge and information available in an open way so that humans could access it dynamically and compellingly, not to mention the prospect of promoting businesses and running commerce across the medium, the Web went from strength to strength, until by the end of the 1990s it too was a powerful and familiar concept, at least in the developed world. With the benefit of hindsight, and noticing that two complementary concepts, the mobile phone and the Web, developed so significantly during the 1990s, it seems inevitable that at some point the telecoms and web industries would consider what it might mean to combine the two platforms.

For mobile networks and phone manufacturers, it meant the attraction of untethering people from their computers in the same way that they had been untethered from their home telephones. For web and software companies, reaching beyond the PC meant the opportunity to add hundreds of millions of new users to the Web. And for users, the idea of being able to access that vast array of information, content, and services through their personal mobile devices would be the exciting realization of yet another chapter of science fiction. The idea, at least, of the mobile web was born.

Source of Information: Wiley, Professional Mobile Web Development with WordPress, Joomla and Drupal
Posted in Mobile Web

Friday, 6 July 2012

The TPC

Posted on 08:40 by Unknown
As noted earlier, the first attempts at standardizing a systems-level benchmark took place in the second half of the 1980s, with the introduction of the debit/credit benchmark, or TP1.

To mitigate deficiencies in the definition of this benchmark and to establish a rigorous basis for systems comparison, a group of computer systems vendors and DBMS vendors formed the Transaction Processing Performance Council (TPC). The TPC aimed both to define benchmarks for transaction-processing systems and to specify rigorous publication rules for the results. Each TPC member is committed to obeying the rules, which require results to be accompanied by a detailed report. Reports are subject to audit.

The TPC subsequently published a new benchmark to characterize decision support applications and, later, one for Web-based transactional applications. The history of TPC benchmarks can be summarized by the following list (of these, TPC-C, TPC-H, TPC-R, and TPC-W were current at the beginning of 2002):

» TPC-A (1989): a simple transaction that performs one update to a bank account
» TPC-B (1990): the "database portion" of TPC-A
» TPC-C (1992): a benchmark involving multiple, complex transactions
» TPC-D (1995): decision support
» TPC-H (1999): decision support
» TPC-R (1999): decision support
» TPC-W (2000): Web-based transaction processing (e.g., electronic commerce, B2B, etc.)

The TPC benchmarks allow systems to be compared in two ways:
» Performance (for example, number of transactions per unit of time)
» Price/performance (for example, total cost of ownership over a three-year period per transaction per minute)
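The relationship between the two metrics is simple arithmetic. The following sketch illustrates how a price/performance figure is derived; the cost and throughput numbers are entirely invented for the example and do not come from any published TPC result.

```python
# Illustrative sketch of the TPC price/performance metric: total cost of
# ownership divided by throughput. All figures below are hypothetical.

def price_performance(total_cost_of_ownership: float, tpm: float) -> float:
    """Price/performance in dollars per transaction per minute:
    three-year total cost of ownership divided by throughput (tpm)."""
    return total_cost_of_ownership / tpm

# Hypothetical system: $1,200,000 TCO over three years at 40,000 tpm.
tco = 1_200_000.0
throughput_tpm = 40_000.0

print(price_performance(tco, throughput_tpm))  # 30.0 ($/tpm)
```

Two systems can then be ranked either on raw throughput or on this cost-normalized figure, which is why the TPC requires both to be published.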

The benchmarks are not specified by source code, so it is the responsibility of each TPC member wishing to characterize a system to implement the benchmarks for that system.

The TPC does not measure systems itself.

As far as the benchmarks are concerned, system cost comprises the cost of acquiring the system from the vendor (hardware and software) along with the cost of maintenance for three years. The transactions called for must be implemented properly, respecting the ACID properties of atomicity, consistency, isolation, and durability.

Information on the activities and the standards issued by the TPC, as well as the published results of measurements, is available on its website at http://www.tpc.org.

Source of Information: Elsevier, Server Architectures
Posted in Server Architectures

Tuesday, 3 July 2012

Numbering the Operations Master (OM) Roles

Posted on 08:21 by Unknown
Most domain controller functionality in Windows Server 2000, Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 was designed to be distributed, multimaster-based. This effectively eliminated the single point of failure that was present with Windows NT primary domain controllers (PDCs). However, five functions still require the use of a single server because their functionality makes it impossible to follow a distributed approach. These Operations Master (OM) roles (previously referred to as FSMO roles) are outlined as follows:

. Schema master—There is only one writable master copy of the AD DS schema in a single AD DS forest. It was deliberately designed this way to limit access to the schema and to minimize potential replication conflicts. There can be only one schema master in the entire AD DS forest.

. Domain naming master—The domain naming master is responsible for the addition of domains into the AD DS forest. This OM role must be placed on a global catalog server because it must have a record of all domains and objects to perform its function. There can be only one domain naming master in a forest.

. PDC emulator—This role used to exist to emulate the legacy Windows NT 4.0 primary domain controller (PDC) for down-level clients. With Windows Server 2008 R2, the PDC emulator still performs certain roles, such as acting as the primary time sync server for the domain. There is one PDC emulator FSMO role per AD DS domain.

. RID master—All objects within AD DS that can be assigned permissions are uniquely identified through the use of a security identifier (SID). Each SID is composed of a domain SID, which is the same for each object in a single domain, and a relative identifier (RID), which is unique for each object within that domain. When assigning SIDs, a domain controller must be able to assign a corresponding RID from a pool that it obtains from the RID master. When that pool is exhausted, it requests another pool from the RID master. If the RID master is down, you might not be able to create new objects in your domain if a specific domain controller runs out of its allocated pool of RIDs. There is one RID master per AD DS domain.

. Infrastructure master—The infrastructure master manages references to domain objects not within its own domain. In other words, a DC in one domain contains a list of all objects within its own domain, plus a list of references to other objects in other domains in the forest. If a referenced object changes, the infrastructure master handles this change. Because it deals with only referenced objects and not copies of the object itself, the infrastructure master must not reside on a global catalog server in multiple domain environments. The only exceptions to this are if every domain controller in your domain is a global catalog server or if you are in a single-domain environment. In the first case, there is no need to reference objects in other domains because full copies are available. In the second case, the infrastructure master role is not utilized because all copies of objects are local to the domain.
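The RID master's pool mechanism described above can be sketched in a few lines. This is only an illustration of the idea (non-overlapping pools guarantee globally unique SIDs); the class names, starting RID, pool size, and domain SID are invented for the example and do not reflect the actual Windows allocation protocol.

```python
# Hedged sketch: how unique SIDs arise from a shared domain SID plus RIDs
# drawn from non-overlapping pools handed out by a single RID master.

DOMAIN_SID = "S-1-5-21-1004336348-1177238915-682003330"  # hypothetical

class RidMaster:
    """Hands out non-overlapping RID pools to domain controllers."""
    def __init__(self, pool_size: int = 500):
        self.next_rid = 1000       # illustrative starting point
        self.pool_size = pool_size

    def allocate_pool(self) -> range:
        pool = range(self.next_rid, self.next_rid + self.pool_size)
        self.next_rid += self.pool_size
        return pool

class DomainController:
    """Consumes its local RID pool; refills it from the RID master."""
    def __init__(self, rid_master: RidMaster):
        self.rid_master = rid_master
        self.pool = iter(rid_master.allocate_pool())

    def create_object_sid(self) -> str:
        try:
            rid = next(self.pool)
        except StopIteration:
            # Pool exhausted: request a fresh one. If the RID master were
            # down at this point, object creation on this DC would fail.
            self.pool = iter(self.rid_master.allocate_pool())
            rid = next(self.pool)
        return f"{DOMAIN_SID}-{rid}"

master = RidMaster(pool_size=2)
dc1, dc2 = DomainController(master), DomainController(master)
print(dc1.create_object_sid())  # domain SID suffixed with -1000
print(dc2.create_object_sid())  # -1002: a distinct pool, so no collision
```

Because each pool range is disjoint, two domain controllers can create objects concurrently without ever minting the same SID, which is the whole point of centralizing pool allocation in one role.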

Transfer of an OM role to another domain controller can be performed as part of regular maintenance; alternatively, in a disaster recovery situation where an OM server has gone offline, the role can be seized so that it is available again. This applies whenever the schema master, domain naming master, PDC emulator, infrastructure master, or RID master either needs to be moved to another system (transferred) or has gone down with no backup available (seized). The transfer and seizure of an OM role is done through a command-line tool called ntdsutil, shown in Figure 4.4. Keep in mind that you should use seizure only in emergency situations, and you should never bring the old OM server that has had its role seized back online into the domain, at the risk of serious system conflicts.

Source of Information: Sams, Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008

Saturday, 30 June 2012

Defining the Lightweight Directory Access Protocol (LDAP)

Posted on 08:16 by Unknown
The Directory Service Protocol utilized by AD DS is based on the Internet-standard Lightweight Directory Access Protocol, defined by RFC 3377. LDAP allows queries and updates to take place in AD DS. Objects in an LDAP-compliant directory must be uniquely identified by a naming path to the object. These naming paths take two forms: distinguished names and relative distinguished names.


Explaining Distinguished Names in AD
The distinguished name of an object in AD DS is represented by the entire naming path that the object occupies in AD DS. For example, the user named Brian McElhinney can be represented by the following distinguished name:

CN=Brian McElhinney,OU=Sydney,DC=Companyabc,DC=com

The CN component of the distinguished name is the common name, which defines an object within the directory. The OU portion is the organizational unit in which the object belongs. The DC components define the DNS name of the Active Directory domain.


Outlining Relative Distinguished Names
The relative distinguished name of an object is basically a truncated distinguished name that defines the object’s place within a set container. For example, take a look at the following object:

OU=Sydney,DC=companyabc,DC=com

This object would have a relative distinguished name of OU=Sydney. The relative distinguished name in this case defines itself as an organizational unit within its current domain container.
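The structure described above is easy to see in code. The following is a minimal sketch, not a full RFC 4514 parser (it assumes no escaped commas inside attribute values), of splitting a distinguished name into its relative distinguished names:

```python
# Minimal illustration: break a distinguished name into (attribute, value)
# pairs, one per relative distinguished name. Not a full DN parser --
# escaped characters (e.g. "\," inside a value) are not handled.

def parse_dn(dn: str) -> list[tuple[str, str]]:
    """Split 'CN=Brian McElhinney,OU=Sydney,DC=companyabc,DC=com'
    into (attribute, value) pairs."""
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

dn = "CN=Brian McElhinney,OU=Sydney,DC=companyabc,DC=com"
rdns = parse_dn(dn)
print(rdns[0])  # ('CN', 'Brian McElhinney') -- the object's own RDN
print(rdns[1])  # ('OU', 'Sydney')
```

The first pair is the object's relative distinguished name; the remaining pairs describe the containers (and ultimately the domain) that the full distinguished name walks through.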

Source of Information: Sams, Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008

Tuesday, 26 June 2012

Conceptualizing the AD DS Schema

Posted on 08:10 by Unknown
The AD DS schema is a set of definitions for all object types in the directory and their related attributes. The schema determines the way that all user, computer, and other object data are stored in AD DS, and it is standard across the entire AD DS structure. Secured by the use of discretionary access control lists (DACLs), the schema controls the possible attributes of each object within AD DS. In a nutshell, the schema is the basic definition of the directory itself and is central to the functionality of a domain environment. Care should be taken to delegate schema control to a highly selective group of administrators because schema modification affects the entire AD DS environment.


Schema Objects
Objects within the AD DS structure such as users, printers, computers, and sites are defined in the schema as objects. Each object has a list of attributes that define it and that can be used to search for that object. For example, a user object for the employee named Weyland Wong will have a FirstName attribute of Weyland and a LastName attribute of Wong. In addition, there might be other attributes assigned, such as departmental name, email address, and an entire range of possibilities. Users looking up information in AD DS can make queries based on this information, for example, searching for all users in the Sales department.
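As a rough illustration of such attribute-based lookups, here is an in-memory stand-in for a directory query. This is not a real LDAP search, and the sample records (beyond the Weyland Wong example above) are invented:

```python
# Illustrative sketch: schema-defined objects carry attributes, and
# directory queries filter on those attributes. In-memory stand-in only;
# the records besides Weyland Wong are hypothetical.

directory = [
    {"FirstName": "Weyland", "LastName": "Wong", "Department": "Sales"},
    {"FirstName": "Brian", "LastName": "McElhinney", "Department": "IT"},
    {"FirstName": "Ada", "LastName": "Ng", "Department": "Sales"},
]

def query(objects, **attributes):
    """Return every object whose attributes match all given values,
    e.g. query(directory, Department='Sales')."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in attributes.items())]

sales = query(directory, Department="Sales")
print([u["LastName"] for u in sales])  # ['Wong', 'Ng']
```

A real AD DS search works the same way conceptually, except the filter is expressed in LDAP filter syntax (for example, `(department=Sales)`) and evaluated by the domain controller against indexed attributes.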


Extending the Schema
One of the major advantages to the AD DS structure is the ability to directly modify and extend the schema to provide for custom attributes. A common attribute extension occurs with the installation of Microsoft Exchange Server, which extends the schema, more than doubling it in size. An upgrade from Windows Server 2003 or Windows Server 2008 AD to Windows Server 2008 R2 AD DS also extends the schema to include attributes specific to Windows Server 2008 R2. Many third-party products have their own schema extensions as well, each providing for different types of directory information to be displayed.


Performing Schema Modifications with the AD DS Service Interfaces
An interesting way of viewing the nuts and bolts of the AD DS schema is with the Active Directory Service Interfaces (ADSI) utility. ADSI was developed to simplify programmatic access to AD DS and can also view any compatible foreign LDAP directory. The ADSIEdit utility enables an administrator to view, delete, and modify schema attributes. Great care should be taken before schema modifications are undertaken because problems in the schema can be difficult to fix.

Source of Information: Sams, Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008

Sunday, 24 June 2012

Outlining Functional Levels in Windows Server 2008 R2 AD DS

Posted on 08:06 by Unknown
Just as Windows 2000 and Windows 2003 had their own functional levels that ensured down-level compatibility with legacy domain versions, Windows Server 2008 R2 has its own functional levels that are used to maintain compatibility. The following functional levels exist in Windows Server 2008 R2:

. Windows 2000 Native functional level—This functional level allows Windows Server 2008 R2 domain controllers to coexist with both Windows 2000 SP3+ and Windows 2003 domain controllers within a forest.

. Windows Server 2003 functional level—This functional level allows Windows 2003 and Windows Server 2008 R2 domain controllers to coexist. Additional functionality is added to the forest, including cross-forest transitive trust capabilities and replication enhancements.

. Windows Server 2008 functional level—In this functional level, all domain controllers must be running Windows Server 2008 or later. Changing the domain and forest functional level to Windows Server 2008 adds additional functionality, such as fine-grained password policies.

. Windows Server 2008 R2 functional level—In this functional level, all domain controllers must be running Windows Server 2008 R2. Changing the forest functional level to this latest AD DS level grants Windows Server 2008 R2 feature functionality, such as access to the Active Directory Recycle Bin.

By default, a fresh installation of Active Directory on Windows Server 2008 R2 domain controllers allows you to choose which functional level you want to start the forest in. If an existing forest is in place, it can be brought to Windows Server 2008 R2 functional level by performing the following steps:

1. Ensure that all domain controllers in the forest are upgraded to Windows Server 2008 R2 or replaced with new Windows Server 2008 R2 DCs.

2. Open Active Directory Domains and Trusts from the Administrative Tools menu on a domain controller.

3. In the left scope pane, right-click on the domain name, and then click Raise Domain Functional Level.

4. In the box labeled Raise Domain Functional Level, select Windows Server 2008 R2, and then click Raise.

5. Click OK and then click OK again to complete the task.

6. Repeat steps 1–5 for all domains in the forest.

7. Perform the same steps on the root node of Active Directory Domains and Trusts, except this time choose Raise Forest Functional Level and follow the prompts.

When all domains and the forest level have been raised to Windows Server 2008 R2 functionality, the forest can take advantage of the latest AD DS functionality, such as the Active Directory Recycle Bin, outlined in more detail later in this chapter. Remember, before you accomplish this task, Windows Server 2008 R2 essentially operates in a downgraded mode of compatibility.

Source of Information : Sams - Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008 | No comments

Wednesday, 20 June 2012

Examining the Key Features of Active Directory Domain Services

Posted on 08:00 by Unknown
Five key components are central to AD DS’s functionality. As compatibility with Internet standards has become required for new directory services, the existing implementations have adjusted and focused on these areas:

. TCP/IP compatibility—Unlike some of the original proprietary protocols such as IPX/SPX and NetBEUI, the Transmission Control Protocol/Internet Protocol (TCP/IP) was designed to be cross-platform. The subsequent adoption of TCP/IP as an Internet standard for computer communications has propelled it to the forefront of the protocol world and essentially made it a requirement for enterprise operating systems. AD DS and Windows Server 2008 R2 utilize the TCP/IP protocol stack as their primary method of communications.

. Lightweight Directory Access Protocol support—The Lightweight Directory Access Protocol (LDAP) has emerged as the standard Internet directory protocol and is used to update and query data within the directory. AD DS directly supports LDAP.

. Domain name system (DNS) support—DNS was created out of a need to translate simplified names that can be understood by humans (such as www.cco.com) into an IP address that is understood by a computer (such as 12.155.166.151). The AD DS structure supports and effectively requires DNS to function properly.
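The name-to-address translation DNS performs can be demonstrated from any standard resolver interface; a minimal Python sketch follows (it resolves localhost so that it works without external network access; resolving a public name such as www.cco.com would instead query a DNS server):

```python
import socket

# DNS-style lookup: map a human-readable host name to the numeric IP
# address that computers use. "localhost" resolves via the local hosts
# file, so no external DNS server is needed for this example.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```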

. Security support—Internet standards-based security support is vital to the smooth functioning of an environment that is essentially connected to millions of computers around the world. Lack of strong security is an invitation to be hacked, and Windows Server 2008 R2 and AD DS have taken security to greater levels. Support for IP Security (IPSec), Kerberos, Certificate Authorities, and Secure Sockets Layer (SSL) encryption is built in to Windows Server 2008 R2 and AD DS.

. Ease of administration—Although often overlooked in powerful directory services implementations, the ease in which the environment is administered and configured directly affects the overall costs associated with its use. AD DS and Windows Server 2008 R2 are specifically designed for ease of use to lessen the learning curve associated with the use of a new environment. Windows Server 2008 R2 also enhanced AD DS administration with the introduction of the Active Directory Administration Center, Active Directory Web Services, and an Active Directory module for Windows PowerShell command-line administration.

Source of Information : Sams - Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008 | No comments

Friday, 15 June 2012

Examining the Evolution of Directory Services

Posted on 07:41 by Unknown
Directory services have existed in one form or another since the early days of computing to provide basic lookup and authentication functionality for enterprise network implementations. A directory service provides detailed information about a user or object in a network, much in the same way that a phone book is used to look up a telephone number for a provided name. For example, a user object in a directory service can store the phone number, email address, department name, and as many other attributes as an administrator desires.

Directory services are commonly referred to as the white pages of a network. They provide user and object definition and administration. Early electronic directories were developed soon after the invention of the digital computer and were used for user authentication and to control access to resources. With the growth of the Internet and the increase in the use of computers for collaboration, the use of directories expanded to include basic contact information about users. Examples of early directories included MVS PROFS (IBM), Grapevine’s Registration Database, and WHOIS.

Application-specific directory services soon arose to address the specific addressing and contact-lookup needs of each product. These directories were accessible only via proprietary access methods and were limited in scope. Applications utilizing these types of directories were programs such as Novell GroupWise, Lotus Notes, and the UNIX sendmail /etc/aliases file.

The further development of large-scale enterprise directory services was spearheaded by Novell with the release of Novell Directory Services (NDS) in the early 1990s. It was adopted by NetWare organizations and eventually was expanded to include support for mixed NetWare/NT environments. The flat, unwieldy structure of NT domains and the lack of synchronization and collaboration between the two environments led many organizations to adopt NDS as a directory service implementation. It was these specific deficiencies in NT that Microsoft addressed with the introduction of AD DS.

The development of the Lightweight Directory Access Protocol (LDAP) corresponded with the growth of the Internet and a need for greater collaboration and standardization. This nonproprietary method of accessing and modifying directory information that fully utilized TCP/IP was determined to be robust and functional, and new directory services implementations were written to utilize this protocol. AD DS itself was specifically designed to conform to the LDAP standard.

Source of Information : Sams - Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008 | No comments

Tuesday, 12 June 2012

Performing a Live Migration

Posted on 07:31 by Unknown
The virtual machine runs on one of the cluster nodes, known as the owner. A Live Migration proceeds through multiple steps, which can be broken down into three stages: preflight migration, virtual machine transfer, and final transfer/startup of the virtual machine.

The preflight stage runs on both the source node (where the virtual machine is currently running) and the target node (where the virtual machine will be moved) to ensure that migration can, in fact, occur successfully.

The detailed steps of Live Migration are as follows:
1. Identify the source and destination machines.

2. Establish a network connection between the two nodes.

3. The preflight stage begins. Check if the various resources available are compatible between the source and destination nodes:
. Are the processors using similar architecture? (For example, a virtual machine running on an AMD node cannot be moved to an Intel node, and vice versa.)
. Are there a sufficient number of CPU cores available on the destination?
. Is there sufficient RAM available on the destination?
. Is there sufficient access to required shared resources (VHD, network, and so on)?
. Is there sufficient access to physical device resources that must remain associated with the virtual machine after migration (CD drives, DVDs, and LUNs or offline disks)?
If any preflight check fails, migration cannot occur: the virtual machine remains on the source node and processing ends here. If preflight succeeds, the virtual machine transfer continues.

4. The virtual machine state (inactive memory pages) moves to the target node to reduce the active virtual machine footprint as much as possible. All that remains on the source node is a small memory working set of the virtual machine. The virtual machine configuration and device information are transferred to the destination node, and the worker process is created. Then the virtual machine memory is transferred to the destination while the virtual machine is still running. The cluster service intercepts memory writes and tracks pages modified during the migration; these pages are retransmitted later. Up to this point, the virtual machine technically remains on the source node.

5. What remains of the virtual machine is briefly paused on the source node. The virtual machine working set is then transferred to the destination host, storage access is moved to the destination host, and the virtual machine is reset on the destination host.

The only downtime for the virtual machine occurs in the last step, and this outage is usually much shorter than most network applications are designed to tolerate. For example, an administrator can be accessing the virtual machine via Remote Desktop while it is being Live Migrated and will not experience an outage. Or a virtual machine could be streaming video to multiple hosts, be Live Migrated to another node, and the end users would not know the difference.
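The stage-3 preflight checks can be sketched as a simple compatibility test. The `Node` fields and the `preflight_ok` helper below are invented for illustration and are not part of any Hyper-V API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    cpu_arch: str       # "Intel" or "AMD"; mixed-vendor moves are refused
    free_cores: int
    free_ram_gb: int
    shared_luns: set    # shared-storage LUNs visible from this node

def preflight_ok(vm_cores: int, vm_ram_gb: int, vm_luns: set,
                 source: Node, dest: Node) -> bool:
    """Return True only if every preflight compatibility check passes."""
    return (source.cpu_arch == dest.cpu_arch      # similar processor architecture
            and dest.free_cores >= vm_cores       # enough CPU cores on destination
            and dest.free_ram_gb >= vm_ram_gb     # enough RAM on destination
            and vm_luns <= dest.shared_luns)      # shared storage reachable

source = Node("Intel", 8, 32, {"LUN1"})
dest_ok = Node("Intel", 4, 16, {"LUN1"})
dest_amd = Node("AMD", 4, 16, {"LUN1"})
print(preflight_ok(2, 8, {"LUN1"}, source, dest_ok))   # True: migration proceeds
print(preflight_ok(2, 8, {"LUN1"}, source, dest_amd))  # False: VM stays on source
```

As in the description above, a single failed check means the virtual machine stays on the source node.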

Use the following steps to perform a Live Migration between two cluster nodes:
1. On one of the cluster nodes, open Failover Cluster Management.

2. Expand the Cluster and select Services and Applications.

3. Select the virtual machine to Live Migrate.

4. Click Live Migrate Virtual Machine to Another Node in the Actions pane and select the node to move the virtual machine to. The virtual machine will migrate to the selected node using the process described previously.

If there are processor differences between the source and destination nodes, Live Migration will display a warning that the CPU capabilities do not match. To perform a Live Migration in that case, you must shut down the virtual machine and enable the processor setting "Migrate to a Physical Computer with a Different Processor Version."

Source of Information : Sams - Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008 | No comments

Saturday, 9 June 2012

Quick Migration and Live Migration

Posted on 07:25 by Unknown
There are two forms of automated migration provided by Windows Server 2008 R2 Hyper-V: Quick Migration and Live Migration. These migration processes can be used to increase service availability for planned and unplanned downtime. Although both technologies achieve the same thing—moving virtual servers between Hyper-V hosts—they use different methods and mechanisms to achieve it. Both require at least two Hyper-V host servers in a cluster, attached to the same shared storage system. Usually, the shared storage is an iSCSI or Fibre Channel storage area network (SAN).


Quick Migration
Quick Migration provides a way to quickly move a virtual machine from one host server to another with a small amount of downtime.

In a Quick Migration, the guest virtual machine is suspended on one host and resumed on another host. This operation happens in the time it takes to transfer the active memory of the virtual machine over the network from the first host to the second host. For a virtual machine with 8GB of RAM, this might take about two minutes using a 1 Gbps iSCSI connection.
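That two-minute figure can be sanity-checked with simple arithmetic. The sketch below ignores protocol overhead and suspend/resume time, so it is a lower bound rather than a precise estimate:

```python
def quick_migration_seconds(vm_ram_gb: float, link_gbps: float) -> float:
    """Raw time to copy a VM's active memory over the migration link.

    Ignores protocol overhead and suspend/resume time, so a real Quick
    Migration takes somewhat longer than this lower bound.
    """
    return vm_ram_gb * 8 / link_gbps  # GB -> gigabits, divided by gigabits/s

print(quick_migration_seconds(8, 1))  # 64.0 seconds of raw copy time
```

Sixty-four seconds of raw copy time lands in the "about two minutes" range once overhead is added.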

Quick Migration was the fastest migration available for Windows Server 2008 Hyper-V. Microsoft made considerable investments in Hyper-V migration technologies, trying to reduce the time required to migrate virtual machines between Hyper-V hosts. The result was Live Migration, which has the same hardware requirements as Quick Migration, but with a near instantaneous failover time.


Live Migration
Since the release of Hyper-V V1 with Windows Server 2008, the number-one most-requested feature from customers has been the ability to migrate running virtual machines between hosts with no downtime. VMware's VMotion has been able to do this for some time. Finally, with Windows Server 2008 R2, it can be done natively with Hyper-V at no extra cost, which makes a compelling case for moving to Hyper-V.

Live Migration uses failover clustering. The quorum model used for your cluster will depend on the number of Hyper-V nodes in your cluster. In this example, we will use two Hyper-V nodes in a Node and Disk Majority Cluster configuration. There will be one shared storage LUN used as the cluster quorum disk and another used as the Cluster Shared Volume (CSV) disk.


If there is only one shared storage LUN available to the nodes when the cluster is formed, Windows will allocate that LUN as the cluster quorum disk and it will not be available to be used as a CSV disk.

Source of Information : Sams - Windows Server 2008 R2 Unleashed
Posted in Windows Server 2008 | No comments

Tuesday, 5 June 2012

Various RAID Levels

Posted on 08:54 by Unknown
» For RAID 0 (Data Striping), the cost of storage is higher than for a single disk (assuming that a single disk has sufficient capacity), since using several disks (regardless of their ability to provide more storage capacity than a single disk) increases costs for items such as racks, cables, controllers, and power. Data availability is lower than for a single disk, because the MTBF of the array is the MTBF of a single disk divided by the number of disks used—that is, a RAID 0 array of N disks has an MTBF N times smaller than that of a single disk. Reading and writing large blocks on a RAID 0 array of N disks takes less time than on a single disk (at best N times less, limited by the fact that the disks are not, in general, rotationally synchronized). This reduces the occupation time of the disks and allows higher bandwidths. The same is true for random reads and writes.
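The MTBF division and the best-case striping speedup described above can be expressed directly; the helper names below are illustrative:

```python
def raid0_mtbf(single_disk_mtbf: float, n_disks: int) -> float:
    """Any single-disk failure loses a RAID 0 array, so the array MTBF
    is the single-disk MTBF divided by the number of disks."""
    return single_disk_mtbf / n_disks

def raid0_best_case_time(single_disk_time: float, n_disks: int) -> float:
    """Best-case large-block transfer time with striping over N disks
    (in practice limited by the lack of rotational synchronization)."""
    return single_disk_time / n_disks

print(raid0_mtbf(1_000_000, 4))      # 250000.0 hours: availability is 4x worse
print(raid0_best_case_time(8.0, 4))  # 2.0 seconds: transfers are up to 4x faster
```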


» For RAID 1 (Mirroring), the storage cost is proportional to the number of copies of the data kept (the factor M in the table). Most often, mirroring is simple replication (M = 2). As to availability, it is clear that RAID 1 has higher availability than RAID 3 or RAID 5, since it has complete data replication rather than a parity disk per N physical disks. Reading, whether of large transfers or random transfers, has higher performance because the data can be read concurrently from multiple disks. Concurrency is less effective for writes, whether for large transfers or random transfers, because of the need to not signal completion until the last write on the last disk is complete.


» RAID 0 + 1 (Striped Mirror) has more or less the same properties as RAID 1, with just one further comment on write operations: the time for write operations for large transfers can be lower than for a single disk, if the time saved as a result of distributing the data across N parallel disks is greater than the extra cost of synchronizing completion across M groups of disks.


» RAID 3 (Parity Disk) availability is ensured through the use of parity information. Large block reads offer similar performance to RAID 0, with any differences attributable to the need to compute parity for the information read, along with any required correction. Large block writes are slower, because such transfers involve both the calculation of parity and writing the parity values to the parity disk, whose busy time can be greater than that of any of the data disks, since there is just one parity disk. Random reads require a parity disk access, calculation of data parity, parity comparison, and any necessary correction. A write operation implies calculation of parity and its writing to the parity disk. Performance compared with a single disk depends on the performance advantage obtained by distributing the data across multiple disks.


» RAID 5 (Spiral Parity) offers essentially the same availability as RAID 3. Again, large transfer performance is impacted by the need to calculate parity and apply correction as required. Random reads and writes are generally better than for RAID 3 because of the distribution of parity information over multiple disks, reducing contention on parity updates.


» RAID 6 (Double Spiral Parity) provides higher availability than RAID 5, since it can survive two concurrent independent failures. RAID 6 has slightly higher read performance than RAID 5, since double parity reduces contention and thus wait time for parity writes (only slightly higher performance, since the number of disks grows only from N + 1 to N + 2). Write operations, on the other hand, are slower, suffering from the increased burden of double parity computation and writing.

Source of Information : Elsevier Server Architectures
Posted in Server Architectures | No comments

Saturday, 2 June 2012

Common Internet File System

Posted on 08:39 by Unknown
We would be remiss in our descriptions of remote access file systems were we to omit mention of CIFS, which is used in Windows systems for remote file access.

CIFS is an improved version of Microsoft's SMB (Server Message Block) protocol; Microsoft offered CIFS to the IETF (Internet Engineering Task Force) for adoption as a standard.

CIFS, installed on a PC, allows that PC access to data held on UNIX systems.

There is an important difference between NFS and CIFS. NFS is stateless, while CIFS is stateful.

This means that an NFS server does not need to maintain any state information on its clients, but a CIFS server must. Thus, in the event of a failure in either the network or the server, recovery is much more complex for a CIFS server than for an NFS server. NLM (Network Lock Manager) was provided to implement lock operations in NFS, but its use is not widespread. Version 4 of NFS supports locking.

Examples of products implementing CIFS include:
» Samba (free software);
» ASU (Advanced Server UNIX) from AT&T;
» TAS (TotalNET Advanced Server) from Syntax.

UNIX file systems need extensions to support Windows file semantics; for example, the “creation date” information needed by Windows and CIFS must be kept in a UNIX file system in a complementary file.


This diagram (not reproduced here) follows our practice of omitting some components for simplicity. We do not show the TLI (Transport Layer Interface) nor the NDIS (Network Driver Interface Specification) layer, for example, nor do we show local accesses on the server. NTFS (NT File System) is the Windows 2000 native file system.

The I/O manager determines whether an access is local or remote; the request is either directed to the local file system or handled by the CIFS Redirector. The Redirector checks whether the data is available in the local cache and, if not, passes the request on to the network layers for forwarding to the server holding the file involved.

Source of Information : Elsevier Server Architectures
Posted in Server Architectures | No comments

Tuesday, 29 May 2012

Parallel File Systems

Posted on 08:28 by Unknown
In cluster environments and MPPs, some file systems have been optimized to take advantage of the processor and memory resources represented by the nodes forming the cluster or MPP.

IBM’s General Parallel File System (GPFS) is an example of a parallel file system; it can be used on AIX clusters (HACMP), MPPs (IBM SP) or Linux clusters. Our description of GPFS is based on [HEG01]. GPFS’s major characteristics are:

» A clusterized file management system allowing transparent cluster-wide file access (that is, a program running on any node can transparently access files, even if they are stored on another node)

» Scalability: GPFS has the ability to make use of processor, memory (used as a disk cache), and I/O resources of the nodes

» Failure-tolerant: GPFS provides journaling for metadata changes and data replication

In the SP implementation, GPFS is built on a software layer called Virtual Shared Disk (VSD). VSD allows disk blocks to be routed over a network, either an IP network or the interconnect network of an MPP. To this extent, VSD can be looked upon as a form of SAN (Storage Area Network), which we will discuss later.

GPFS is installed on the system nodes; it is possible to configure some nodes as specialist storage nodes. Data is shared by the applications running on the nodes provided with GPFS instances. Data is cached on the client nodes.

GPFS distributes data over the available disks, providing an effect similar to data striping, which we will discuss later in the section on RAID systems.

Apart from AIX, the major software components of this architecture are the components of PSSP (Parallel System Support Programs), which are specific to the SP environment. The major elements are:

» VSD: Virtual Shared Disk, which provides the ability to access logical volumes as if they were local to the accessing node.

» GS: Group Services, which provides notification on the event of failure of a node or process, along with recovery of programs executing on the failing nodes on surviving ones. These services also initialize information necessary to VSD’s operation.

» RVSD: Recoverable Virtual Shared Disk, which makes it possible to prevent access by a node to certain disks during recovery phases of the node.

AIX also includes a component called VFS (Virtual File System), which allows applications’ file access requests to be directed to the appropriate file system (e.g., JFS (AIX’s journaled file system) or GPFS) transparently, depending on the type of the file.

Source of Information : Elsevier Server Architectures
Posted in Server Architectures | No comments

Saturday, 26 May 2012

IEEE 802.11n

Posted on 08:03 by Unknown
In response to growing market demand for higher-performance WLANs, the IEEE formed task group 802.11n. The scope of this task group is to define modifications to the PHY and MAC layers to deliver a minimum of 100 Mbps throughput at the MAC service access point (SAP).

802.11n employs an evolutionary philosophy, reusing existing technologies where practical while introducing new technologies where they provide effective performance improvements to meet the needs of evolving applications. Legacy technologies such as OFDM, FEC coding, interleaving, and quadrature amplitude modulation mapping have been retained to keep costs down and ease backward compatibility.

There are three key areas that need to be considered when addressing increases in WLAN performance. First, improvements in radio technology are needed to increase the physical transfer rate. Second, new mechanisms implementing the effective management of enhanced PHY performance modes must be developed. Third, improvements in data transfer efficiency are needed so that protocol overhead does not erode the gains achieved with an increase in physical transfer rate.

The emerging 802.11n specification differs from its predecessors in that it provides for a variety of optional modes and configurations that dictate different maximum raw data rates. This enables the standard to provide baseline performance parameters for all 802.11n devices, while allowing manufacturers to enhance or tune capabilities to accommodate different applications and price points. WLAN hardware does not need to support every option to be compliant with the standard.

The first requirement is to support an OFDM implementation that improves upon the one employed in the 802.11a/g standards, using a higher maximum code rate and slightly wider bandwidth. This change improves the highest attainable raw data rate to 65 Mbps, from 54 Mbps in the existing standards.

Multi-input, multi-output (MIMO) technology is used in 802.11n to evolve the existing OFDM physical interface presently implemented in legacy 802.11a/g. MIMO harnesses multipath with a technique known as space-division multiplexing (SDM). The transmitting WLAN device splits a data stream into multiple parts, called spatial streams, and transmits each spatial stream through separate antennas to corresponding antennas on the receiving end. The current 802.11n provides for up to four spatial streams, even though compliant hardware is not required to support that many.

Doubling the number of spatial streams from one to two effectively doubles the raw data rate. There are trade-offs, however, such as increased power consumption and, to a lesser extent, cost. The 802.11n specification includes an MIMO power-save mode, which mitigates power consumption by using multiple paths only when communication would benefit from the additional performance. The MIMO power-save mode is a required feature in the 802.11n specification.

There are two features in the specification that focus on improving MIMO performance: (1) beam-forming and (2) diversity. Beam-forming is a technique that focuses radio signals directly on the target antenna, thereby improving range and performance by limiting interference. Diversity exploits multiple antennas by combining the outputs of or selecting the best subset of a larger number of antennas than required to receive a number of spatial streams. The 802.11n specification supports up to four antennas.

Another optional mode in 802.11n effectively doubles data rates by doubling the width of a WLAN communications channel from 20 to 40 MHz. The primary trade-off is fewer channels available for other devices. In the case of the 2.4-GHz band, there is enough room for three nonoverlapping 20-MHz channels. A 40-MHz channel does not leave much room for other devices to join the network or transmit in the same airspace. This means intelligent, dynamic management is critical to ensuring that the 40-MHz channel option improves overall WLAN performance by balancing the high-bandwidth demands of some clients with the needs of other clients to remain connected to the network.
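Combining the spatial-stream and channel-width multipliers gives a rough feel for how raw rates scale. This is a deliberate simplification: it ignores guard-interval and coding-rate options, so real 802.11n MCS rates differ somewhat from these figures:

```python
def raw_rate_mbps(spatial_streams: int, use_40mhz: bool,
                  base_rate_mbps: float = 65.0) -> float:
    """Each spatial stream multiplies the base single-stream OFDM rate,
    and a 40 MHz channel roughly doubles it again. Ignores guard-interval
    and coding-rate options, so actual MCS table values differ a little."""
    rate = base_rate_mbps * spatial_streams
    return rate * 2 if use_40mhz else rate

print(raw_rate_mbps(1, False))  # 65.0  (single stream, 20 MHz channel)
print(raw_rate_mbps(2, False))  # 130.0 (doubling streams doubles the raw rate)
print(raw_rate_mbps(4, True))   # 520.0 (four streams in a 40 MHz channel)
```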

One of the most important features in the 802.11n specification for improving mixed-mode performance is aggregation. Rather than sending a single data frame, the transmitting client bundles several frames together. Aggregation thus improves efficiency by increasing the percentage of time that data is being transmitted over the network.

The 802.11n specification was developed with previous standards in mind to ensure compatibility with more than 200 million Wi-Fi (802.11b) devices currently in use. An 802.11n AP will communicate with 802.11a devices on the 5-GHz band as well as 802.11b and 802.11g hardware on 2.4-GHz frequencies. In addition to basic interoperability between devices, 802.11n provides for greater network efficiency in mixed mode over what 802.11g offers.

Because it promises far greater bandwidth, better range, and reliability, 802.11n is advantageous in a variety of network configurations. And as emerging networking applications take hold in the home, a growing number of consumers will view 802.11n not just as an enhancement to their existing network, but as a necessity. Some of the current and emerging applications that are driving the need for 802.11n are voice over IP (VoIP), streaming video and music, gaming, and network attached storage.

Source of Information : Elsevier Wireless Networking Complete
Posted in Wireless | No comments

Wednesday, 23 May 2012

IEEE 802.11b — High-Rate DSSS

Posted on 07:38 by Unknown
In September 1999, the IEEE ratified the 802.11b high-rate amendment to the standard, whose key contribution was to standardize PHY support for two new speeds, 5.5 and 11 Mbps. To accomplish this, DSSS was selected as the sole PHY technique, since frequency hopping (FH) cannot support the higher speeds without violating current FCC regulations. The implication is that 802.11b systems will interoperate with 1 and 2 Mbps 802.11 DSSS systems, but will not work with 1 and 2 Mbps FHSS systems.

The original version of 802.11 specifies, in the DSSS standard, an 11-bit chipping sequence, called a Barker sequence, to encode all data sent over the air. Each 11-chip sequence represents a single data bit (1 or 0) and is converted to a waveform, called a symbol, that can be sent over the air. These symbols are transmitted at a rate of one million symbols per second (Msps) using binary phase shift keying (BPSK). In the case of 2 Mbps, a more sophisticated implementation based on quadrature phase shift keying (QPSK) is used. This doubles the data rate over BPSK via improved efficiency in the use of the radio bandwidth.

To increase the data rate in the 802.11b standard, advanced coding techniques are employed. Rather than the 11-chip Barker sequence used at 1 and 2 Mbps, 802.11b specifies complementary code keying (CCK). CCK allows for multichannel operation in the 2.4 GHz band by using the existing 1 and 2 Mbps DSSS channelization scheme. CCK consists of a set of 64 8-bit code words. As a set, these code words have unique mathematical properties that allow them to be correctly distinguished from one another by a receiver, even in the presence of substantial noise and multipath interference. The 5.5 Mbps rate uses CCK to encode four bits per carrier, while the 11 Mbps rate encodes eight bits per carrier. Both speeds use QPSK modulation and a symbol rate of 1.375 Msps. This is how the higher data rates are obtained.
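The quoted rates follow directly from symbol rate times bits per symbol:

```python
def dsss_rate_mbps(symbol_rate_msps: float, bits_per_symbol: float) -> float:
    """Data rate is simply the symbol rate times the bits each symbol carries."""
    return symbol_rate_msps * bits_per_symbol

print(dsss_rate_mbps(1.0, 1))    # 1.0 Mbps: Barker sequence with BPSK
print(dsss_rate_mbps(1.0, 2))    # 2.0 Mbps: QPSK carries two bits per symbol
print(dsss_rate_mbps(1.375, 4))  # 5.5 Mbps: CCK, four bits per carrier
print(dsss_rate_mbps(1.375, 8))  # 11.0 Mbps: CCK, eight bits per carrier
```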

To support very noisy environments as well as extended ranges, 802.11b WLANs use dynamic rate shifting, allowing data rates to be automatically adjusted to compensate for the changing nature of the radio channel. Ideally, users connect at the full 11 Mbps rate. However, when devices move beyond the optimal range for 11 Mbps operation, or if substantial interference is present, 802.11b devices will transmit at lower speeds, falling back to 5.5, 2, and 1 Mbps. Likewise, if a device moves back within range of a higher-speed transmission, the connection will automatically speed up again. Rate shifting is a PHY mechanism transparent to the user and the upper layers of the protocol stack.

Source of Information : Elsevier Wireless Networking Complete
Posted in Wireless | No comments

Friday, 18 May 2012

Security of IEEE 802.11 Systems

Posted on 07:30 by Unknown
The IEEE 802.11 standard provides for MAC access control and encryption mechanisms. Earlier, the Wired Equivalent Privacy (WEP) algorithm was used to encrypt messages. WEP uses a Rivest Cipher 4 (RC4) pseudo-random number generator with two key structures of 40 and 128 bits. Because of the inherent weaknesses of WEP, the IEEE 802.11i committee developed a new encryption algorithm and worked on enhanced security and authentication mechanisms for 802.11 systems.

For access control, the ESSID (also known as a WLAN service area ID) is programmed into each AP and is required in order for a wireless client to associate with an AP. In addition, there is provision for a table of MAC addresses, called an access control list, to be included in the AP, restricting access to stations whose MAC addresses are on the list.

Beyond layer-2, 802.11 WLANs support the same security standards supported by other 802 LANs for access control (such as network operating system logins) and encryption (such as IPSec or application-level encryption). These higher-level technologies can be used to create end-to-end secure networks encompassing both wired LAN and WLAN components, with the wireless piece of the network gaining additional security from the IEEE 802.11i feature set.

Source of Information : Elsevier Wireless Networking Complete
Posted in Wireless | No comments

Tuesday, 15 May 2012

WLAN Technologies

Posted on 07:22 by Unknown
The technologies available for use in a WLAN include IR, UHF (narrowband), and spread spectrum (SS) implementations. Each implementation comes with its own set of advantages and limitations.


IR Technology
IR is an invisible band of radiation that lies just below the visible portion of the electromagnetic spectrum. This type of transmission is most effective when a clear line of sight exists between the transmitter and the receiver.

Two types of IR WLAN solutions are available: diffused-beam and direct-beam (or line-of-sight). Currently, direct-beam WLANs offer a faster data rate than diffused-beam networks. Direct-beam is more directional; diffused-beam technology uses reflected rays to transmit and receive the data signal, and consequently achieves lower data rates, in the 1 – 2 Mbps range.

IR is a short-range technology. When used indoors, it can be limited by solid objects such as doors, walls, merchandise, or racking. In addition, the lighting environment can affect signal quality. For example, loss of communication may occur because of a large amount of sunlight or background light in the environment. Fluorescent lights may also contain large amounts of IR. This problem may be solved by using higher signal power and a filter of appropriate bandwidth, which reduces the IR signals coming from outside sources. In an outdoor environment, snow, ice, and fog may affect the operation of an IR-based system.


UHF Narrowband Technology
UHF wireless data communication systems have been available since the early 1980s.
These systems normally transmit in the 430 – 470 MHz frequency range, with rare systems using segments of the 800 MHz range. The lower portion of this band, 430 – 450 MHz, is referred to as the unprotected (unlicensed) band, and 450 – 470 MHz is referred to as the protected (licensed) band. In the unprotected band, RF licenses are not granted for specific frequencies and anyone is allowed to use any frequency; in the protected band, a license gives the customer some assurance of sole use of that frequency.

Because independent narrowband RF systems cannot coexist on the same frequency, government agencies allocate specific RFs to users through RF site licenses. A limited amount of unlicensed spectrum is also available in some countries. To make many frequencies available for allocation, the bandwidth given to each user is kept very small.

The term narrowband is used to describe this technology because the RF signal is sent in a very narrow bandwidth, typically 12.5 or 25 kHz. Power levels range from 1 to 2 W for
narrowband RF data systems. This narrow bandwidth combined with high power results in larger transmission distances than are available from 900 MHz or 2.4 GHz SS systems, which have lower power levels and wider bandwidths. Table 5.4 lists the advantages and disadvantages of UHF technology.

Many modern UHF systems use synthesized radio technology, which refers to the way channel frequencies are generated in the radio. Crystal-controlled legacy UHF products require factory installation of a unique crystal for each possible channel frequency. Synthesized radio technology instead uses a single, standard crystal frequency and derives the required channel frequency by dividing the crystal frequency down to a small value, then multiplying it up to the desired channel frequency. The division and multiplication factors are unique for each desired channel frequency and are programmed into digital memory in the radio at the time of manufacture. Synthesized UHF-based solutions can therefore be installed without the complexity of per-channel crystals: common equipment can be purchased, and the specific UHF frequency used by each device can be tuned to the requirements of its location. Additionally, synthesized UHF radios do not exhibit the frequency drift problem experienced in crystal-controlled UHF radios.
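The divide-then-multiply step can be made concrete with a small arithmetic sketch. The 12.8 MHz reference and 12.5 kHz channel step below are illustrative values consistent with the narrowband channel widths mentioned earlier, not figures from the text.

```python
# Integer-N synthesis sketch: one reference crystal serves every channel.
F_REF_HZ = 12_800_000   # assumed 12.8 MHz reference crystal
R = 1024                # reference divider: 12_800_000 / 1024 = 12_500 Hz

def channel_frequency(n):
    """Divide the crystal down to the channel step, then multiply by N."""
    step = F_REF_HZ // R      # 12.5 kHz comparison frequency
    return n * step           # per-channel multiplication factor N
```

Each channel then reduces to a stored integer: N = 36 000 yields 450.000 MHz, and N = 36 001 the adjacent 12.5 kHz step at 450.0125 MHz.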

Modern UHF systems allow APs to be individually configured for operation on one of the several preprogrammed frequencies. Terminals are programmed with a list of all frequencies used in the installed APs, allowing them to change frequencies when roaming. To increase throughput, APs may be installed with overlapping coverage but use different frequencies.
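Terminal-side roaming over such an installation can be sketched as a scan across the programmed frequency list. The channel plan and probe callback below are hypothetical.

```python
# Assumed site channel plan: every frequency used by the installed APs.
SITE_FREQUENCIES_MHZ = [452.000, 455.500, 460.250]

def find_ap(hears_ap_on):
    """Try each programmed frequency until a probe hears an AP."""
    for freq in SITE_FREQUENCIES_MHZ:
        if hears_ap_on(freq):
            return freq
    return None
```

A roaming terminal simply repeats this scan when it loses its current AP, which is why every frequency used in the installation must be programmed into the terminal.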


Spread Spectrum Technology
Most WLANs use SS technology, a wideband RF technique that uses the entire allotted
spectrum in a shared fashion as opposed to dividing it into discrete private pieces (as with
narrowband). The SS system spreads the transmission power over the entire usable spectrum. This is obviously a less efficient use of the bandwidth than the narrowband approach. However, SS is designed to trade off bandwidth efficiency for reliability, integrity, and security. The trade-off produces a signal that is easier to detect, provided that the receiver knows the parameters of the SS signal being broadcast. If the receiver is not tuned to the right frequency, an SS signal looks like background noise.

By operating across a broad range of radio frequencies, an SS device can communicate clearly despite interference from other devices using the same spectrum in the same physical location. In addition to its relative immunity to interference, SS makes eavesdropping and jamming inherently difficult.

In commercial applications, SS techniques currently offer data rates up to 2 Mbps. Because the FCC does not require site licensing for the bands used by SS systems, this technology has become the standard for high-speed RF data transmission. Two modulation schemes are commonly used to encode SS signals: direct sequence SS (DSSS) and frequency-hopping SS (FHSS).

FHSS uses a narrowband carrier that changes frequency in a pattern known to both transmitter and receiver. Properly synchronized, the net effect is to maintain a single logical channel. To an unintended receiver, FHSS appears to be a short-duration impulse noise.
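The "pattern known to both transmitter and receiver" is essentially a shared pseudo-random sequence. A minimal sketch, assuming both ends share a seed; the 79-channel count matches common 2.4 GHz FHSS practice but is an assumption here.

```python
import random

def hop_sequence(seed, channels, hops):
    """Pseudo-random hop pattern; identical seeds give identical
    patterns, so transmitter and receiver stay on one logical channel."""
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(hops)]
```

An unintended receiver parked on any single channel sees the transmitter only on the occasional hop, i.e., as short-duration impulse noise.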

DSSS generates a redundant bit pattern for each bit to be transmitted. This bit pattern is called a spreading code. The longer the code, the greater the probability that the original data can be recovered (and, of course, the more bandwidth required). To an unintended receiver, DSSS appears as low-power wideband noise and is rejected by most narrowband receivers.
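Spreading and despreading can be illustrated with the 11-chip Barker code used by 802.11 DSSS at its basic rates; the correlation-threshold receiver below is a simplified sketch, not a real demodulator.

```python
# 11-chip Barker spreading code, written with +1/-1 chips.
BARKER = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def spread(bits):
    """Replace each data bit with the spreading code (or its inverse)."""
    chips = []
    for b in bits:
        sign = 1 if b else -1
        chips.extend(sign * c for c in BARKER)
    return chips

def despread(chips):
    """Correlate each 11-chip group against the code to recover a bit."""
    bits = []
    for i in range(0, len(chips), 11):
        corr = sum(c * k for c, k in zip(chips[i:i + 11], BARKER))
        bits.append(1 if corr > 0 else 0)
    return bits
```

The redundancy is what makes recovery robust: even if a few chips are corrupted in transit, the correlation still lands on the correct side of zero, so the original bit is recovered.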
