Sunday, July 30, 2006

A Way for Search Engines to Improve

Wouldn't it be nice if the search engines could comprehend our impressions of search results and adjust their databases accordingly? Properly optimized web pages would show up well in contextual searches and be rewarded with favorable reviews and listings. Pages which were spam or which had content that did not properly match the query would get negative responses and be pushed down in the search results.

Well, this reality is much closer than you might think.

To date, most webmasters and search engine marketers have ignored or overlooked the importance of traffic as a factor in search engine algorithms, and thus have not taken it into consideration as part of their search engine optimization strategy. However, that might soon change as search engines explore new methods to improve their search results. Teoma and Alexa already employ traffic as a factor in the presentation of their search results. Teoma incorporated the technology used by Direct Hit, the first engine to use click-through tracking and stickiness measurement as part of its ranking algorithm. More about Alexa below.

How Can Traffic Be a Factor?

Click popularity sorting algorithms track how many users click on a link, and stickiness measurement calculates how long they stay at a website. Properly used and combined, this data can make it possible for users, via passive feedback, to help search engines organize and present relevant search results.

Click popularity is calculated by measuring the number of clicks each web site receives from a search engine's results page. The theory is that the more often a search result is clicked, the more popular the web site must be. For many engines, the click-through calculation ends there. But for the search engines that have enabled toolbars, the possibilities are enormous.

Stickiness measurement is a great idea in theory. The premise is that a user will click the first result and either spend time reading a relevant web page or click the back button and look at the next result. The longer a user spends on each page, the more relevant it must be. This measurement goes a long way toward fixing the problem of "spoofing" click popularity results. A great example of a search engine that uses this type of data in its algorithms is Alexa.
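
Before looking at Alexa in detail, here is a toy sketch of how the two signals might be blended. The weights, the two-minute dwell cap and all the numbers are my own illustrative guesses; real engines keep their formulas secret.

    # Toy blend of click popularity and stickiness (all numbers and
    # weights are illustrative, not any engine's real formula).
    results = [
        # (url, clicks received from the results page, average seconds on page)
        ("siteA.com", 900, 8),   # heavily clicked, but visitors bounce quickly
        ("siteB.com", 400, 95),  # fewer clicks, but visitors stay and read
    ]

    def score(clicks, avg_seconds, impressions=1000):
        click_popularity = clicks / impressions           # click-through rate
        stickiness = min(avg_seconds / 120.0, 1.0)        # cap dwell time at two minutes
        return 0.4 * click_popularity + 0.6 * stickiness  # hypothetical weights

    for url, clicks, secs in results:
        print(url, round(score(clicks, secs), 3))
    # siteB.com outranks siteA.com: long dwell time offsets fewer clicks,
    # which is exactly how stickiness counters "spoofed" clicks.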

Alexa's algorithm differs from those of the other search engines. Its click popularity algorithm collects traffic pattern data from its own site, from partner sites, and from its toolbar. Alexa combines three distinct concepts: link popularity, click popularity and click depth. Its directory ranks related links based on popularity, so if your web site is popular, it will be well placed in Alexa.

The Alexa toolbar doesn't just allow searches; it also reports on people's Internet navigation patterns. It records where people who use the Alexa toolbar go. For example, the technology can build a profile of which web sites are popular in the context of which search topic, and display the results sorted according to overall popularity on the Internet.

For example, a user clicks a link for a "financial planner", but the web site's content is an "online casino". They curse for a moment, sigh, click back to the search results, and look at the next result; the web site gets a low score. The next result is on topic, and they read 4 or 5 pages of content. This pattern is clearly identifiable, and Alexa uses it to help sort results by popularity. The theory is that the more page views a web page has, the more useful a resource it must be. For example, follow this link today -

http://www.alexa.com/data/details/traffic_details?q=&url=http://www.metamend.com/

- look at the traffic details chart, and then click the "Go to site now" button. Repeat the procedure tomorrow and you should see a spike in user traffic. This shows how Alexa ranks a web site for a single day.

What Can I Do To Score Higher With Click Popularity Algorithms?

Since the scores that generate search engine rankings are based on numerous factors, there's no magic formula to improve your site's placement; it's a combination of things. Optimizing your content, structure and meta tags, and increasing keyword density won't directly change how your site performs in click-tracking systems, but it will help your web site's stickiness measurement by ensuring that the content is relevant to the search query. This relevance will help it move up the rankings and thus improve its click popularity score.

Search Engines Can Use the Click Through Strategy to Improve Results

Search engines need to keep an eye on new technologies and innovative techniques to improve the quality of their search results. Their business model is based on providing highly relevant results to a query quickly and efficiently. If they deliver inaccurate results too often, searchers will go elsewhere to find a more reliable information resource. The proper and carefully balanced application of usage data, such as that collected by Alexa, combined with a comprehensive ranking algorithm, could be employed to improve the quality of search results for web searchers.

Such a ranking formula would certainly cause some waves within the search engine community and with good reason. It would turn existing search engine results on their head by demonstrating that search results need not be passive. Public feedback to previous search results could be factored into improving future search results.

Is any search engine employing such a ranking formula? The answer is yes. ExactSeek recently announced it had implemented such a system, making it the first search engine to integrate direct customer feedback into its results. ExactSeek still places an emphasis on content and quality of optimization, so a well-optimized web site which meets its guidelines will perform well. What this customer feedback system will do is validate the entire process, automatically letting the search engine know how well received a search result is. Popular results will gain greater exposure, whereas unpopular results will be pushed down in the rankings.

ExactSeek has recently entered into a variety of technology alliances, including the creation of an ExactSeek meta tag awarded solely to web sites that meet its quality-of-optimization standards. Cumulatively, these alliances dramatically improve its search results.

ExactSeek's innovative approach to ranking search results could be the beginning of a trend among search engines to incorporate traffic data into their ranking algorithms. The searching public will likely have the last word, but webmasters and search engine marketers should take notice that the winds of change are once again blowing on the search engine playing field.

Did you find the information in this article useful? Feel free to pass it along to a friend or drop us a line at comments@metamend.com.

Thursday, July 27, 2006

Travel Your Way To More Traffic

I am not a professional photographer nor am I in the travel business. However, I stay very busy these days taking scenic photographs and featuring them on my web site because my “not so” professional travel photos are bringing serious traffic to my web pages.

I consider myself a serious hobbyist when it comes to photography. It’s a hobby because so far no one is willing to pay for any of my pictures. I know it’s a serious hobby because I am never totally honest with my wife when she wants to know just how much I spend on photography. Over the years I have been privileged to see and shoot a few of the spectacularly scenic locations that exist in the U.S.

When I started designing web pages I was constantly on the search for fresh images. One afternoon my search for a waterfall took me into the basement of my home where my wife had stored dozens of shoeboxes filled with hundreds of ordinary vacation snapshots. I found the perfect scene for my project among the many photographs I had taken of Niagara Falls.

I scanned several of the Niagara pictures into my computer and tweaked them slightly with an image editor. I was very pleased with the results and decided to post the images on my personal website. I made a mistake when I typed the words and the file ended up as “NiagraFalls”. I didn’t think that much about it because I only planned to publicize the Niagara Falls pages to a few friends and relatives so they could enjoy the pictures also.

A few weeks later I was checking the stats for my website and noticed that a number of guests had surfed in through search engines. To my surprise 30% of my visitors had come through a search for “niagrafalls”. I went to one of the major search engines and keyed in that phrase and was amazed to see that my site was in the top ten returns.

I decided to see if other scenic hotspots might become “virtual” destinations of choice. I went through the same process with photos I had taken at Garden of the Gods State Park in Colorado Springs, Colorado, as well as a few other Colorado locations. I was pleased that “gardenofthegods” became another key search phrase that brings people to my site.

In the past I used my personal web pages to promote my web design and hosting business. Unfortunately, web design and hosting is not that relevant to my guests who come because of scenic interests. After some measure of trial and error I discovered that “entertainment” products market very well to the scenic seekers that visit my site.

I began a banner rotation on my photographic pages, linking out to various entertainment sites I’m affiliated with. So far the results have been very encouraging. In a future article I hope to explain what I discovered about keyword searches in the entertainment sector.

Let me summarize with these simple instructions. Get those vacation photographs out and upload them to your website. Build a separate page for each exotic location that you have been to. Be sure to enter the name of the place in your page title and in the keywords and description meta-tags. Set up a relevant banner rotation program and enjoy the traffic of virtual travel.
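
As a minimal sketch of that last step (my own illustration, not part of the original story), a few lines of Python can stamp each location’s name into the page title and meta-tags, including the run-together keyword phrase like “gardenofthegods”:

    # Print the <title> and meta-tags for each destination page.
    # The run-together keyword mirrors search phrases like "gardenofthegods".
    locations = ["Niagara Falls", "Garden of the Gods"]

    for place in locations:
        compact = place.replace(" ", "").lower()  # e.g. "gardenofthegods"
        print(f"<title>{place} Photos</title>")
        print(f'<meta name="keywords" content="{place}, {compact}, vacation photos">')
        print(f'<meta name="description" content="Scenic vacation photographs of {place}.">')
        print()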

Monday, July 24, 2006

Get Listed in Google Without Submitting Your Site

With Google delivering so much traffic, it is only natural to be eager to submit your page and have it indexed as soon as possible. However, submitting your page is not your only option, and it's not the best one. If this sounds strange, keep reading.

Talking about its indexing process, Google says:

"We add thousands of new sites to our index each time we crawl the Web, but if you like, you may submit your URL as well. Submission is not necessary and does not guarantee inclusion in our index. Given the large number of sites submitting URLs, it's likely your pages will be found in an automatic crawl before they make it into our index through the URL submission form."

We can therefore draw two conclusions:

* 1. Submitting your site does not guarantee inclusion.
* 2. Most pages are found and indexed automatically, when Google crawls the web.

The Google folks have also made it clear that Google gives a page more importance when it is found through an automatic crawl. This can be easily verified when we consider how Google's PageRank system works: when page A links to page B, part of page A's PageRank trickles down to page B, increasing page B's PageRank (and, therefore, its importance). A manually submitted page will not enjoy this benefit.
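
To see why inbound links matter, here is a toy PageRank computation - a simplified model (uniform splitting of rank across outbound links, the commonly cited 0.85 damping factor), not Google's actual implementation:

    # Toy PageRank: each page splits its rank among its outbound links,
    # so a link from A passes part of A's rank to B. Simplified model only.
    links = {
        "A": ["B", "C"],  # page A links to pages B and C
        "B": ["C"],
        "C": ["A"],
    }
    damping = 0.85
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):  # iterate until the ranks stabilize
        new_rank = {page: (1 - damping) / len(links) for page in links}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # rank "trickles down" the link
        rank = new_rank

    print(rank)  # pages with more (and stronger) inbound links rank higher

In this model, a page nobody links to yet keeps only the minimal base rank - which is the handicap a manually submitted, unlinked page starts with.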

Now that you know that manual submission is neither necessary nor the best way to go, what can you do to make Google find your pages?

The best way, at least in my personal experience, is to write an article in your area of expertise and submit it to popular article syndication sites like http://www.marketing-seek.com or http://www.ideamarketers.com. These sites will post your article so that online publishers can use it for free in exchange for including your resource box at the end of the article. A resource box (a.k.a. a byline) is a small paragraph about yourself, written by you, which contains a link to your homepage.

In very little time, your article will show up on websites and in ezines across the web. It will then be just a matter of time (usually days) before Google crawls those pages and finds your links. If you have followed good web design practices and included a link to a site map on your homepage, Google will follow it as soon as it finds your homepage, and all your pages will be indexed. It's as simple as that.

The most popular articles you can write are those that list a collection of tips related to your area of expertise. One of my most successful articles is called "50 Surefire Web Design Tips", and it is nothing but a checklist of guidelines to follow when designing a website.

Another good way to help Google find your pages is to exchange links with other sites. Google will crawl those sites, find the links to your page, and add it to the index.

Finally, remember to optimize your pages before you try to get them listed, so that you have a better chance of ranking high in the search engine results pages (SERPs). After all, what good would it do to get your pages listed if nobody can find them?

Sunday, July 23, 2006

Finding Your Way Through Online Articles

For me, deciding to start a home business was easy. Learning how to do it wasn’t. With a plethora of resources and information out there, starting one seemed like a big headache. Well, when you have guidance from someone who has been there, you have an edge. All the struggles, false information, scams, and money spent on my online adventure will benefit you: you won’t have to make the same mistakes I made.

My website is about caring for my subscribers. You’ll find out that by helping others succeed, you also succeed. So I’m here to mentor you into succeeding with your online business no matter what it may be.

When you subscribe to my weekly Business Tips Newsletter at www.madh4ttr.com, you receive the right information at the right time to systematically build a profitable business online. It doesn’t matter what you sell; it will work for any business.

In addition to the weekly tips, I will be sending you a series of ebooks and reports about things like search engine secrets, powerlinking, traffic generation and much more. But first, we start with the basics in your first issue of the newsletter.

You are probably wondering why I am doing all this and giving away so much valuable information. As I mentioned earlier (and as you will soon find out), helping you succeed in your profitable online business helps me in mine too. So naturally I want to do all I can to help you. If you need personal assistance, you can email or phone me with any questions you may have.

In closing, I want to tell you that it is work to run a business out of your home. The rewards are tremendous, though. Working for yourself can invigorate you and give you a new outlook on life. Your income is limited only by how hard you are willing to work, and as you will see, there is a lot of money to be made on the internet.

Wednesday, July 19, 2006

The Metaphors of the Net

by: Sam Vaknin


1. The Genetic Blueprint

A decade after the invention of the World Wide Web, Tim Berners-Lee is promoting the "Semantic Web". The Internet hitherto is a repository of digital content. It has a rudimentary inventory system and very crude data location services. As a sad result, most of the content is invisible and inaccessible. Moreover, the Internet manipulates strings of symbols, not logical or semantic propositions. In other words, the Net compares values but does not know the meaning of the values it thus manipulates. It is unable to interpret strings, to infer new facts, to deduce, induce, derive, or otherwise comprehend what it is doing. In short, it does not understand language. Run an ambiguous term by any search engine and these shortcomings become painfully evident. This lack of understanding of the semantic foundations of its raw material (data, information) prevents applications and databases from sharing resources and feeding each other. The Internet is discrete, not continuous. It resembles an archipelago, with users hopping from island to island in a frantic search for relevancy.

Even visionaries like Berners-Lee do not contemplate an "intelligent Web". They are simply proposing to let users, content creators, and web developers assign descriptive meta-tags ("name of hotel") to fields, or to strings of symbols ("Hilton"). These meta-tags (arranged in semantic and relational "ontologies" - lists of metatags, their meanings and how they relate to each other) will be read by various applications and allow them to process the associated strings of symbols correctly (place the word "Hilton" in your address book under "hotels"). This will make information retrieval more efficient and reliable and the information retrieved is bound to be more relevant and amenable to higher level processing (statistics, the development of heuristic rules, etc.). The shift is from HTML (whose tags are concerned with visual appearances and content indexing) to languages such as the DARPA Agent Markup Language, OIL (Ontology Inference Layer or Ontology Interchange Language), or even XML (whose tags are concerned with content taxonomy, document structure, and semantics). This would bring the Internet closer to the classic library card catalogue.
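
A crude sketch of the mechanism may help (the tiny "ontology" below is a made-up sample, not DAML or OIL syntax): without the tag, "Hilton" is just a string; with it, an application knows where the value belongs.

    # A string plus a semantic meta-tag, and a toy ontology that tells
    # software what the tag means. All names here are hypothetical.
    ontology = {
        "hotel": {"broader": "lodging", "file_under": "address book / hotels"},
        "city": {"broader": "place", "file_under": "address book / locations"},
    }

    record = {"value": "Hilton", "tag": "hotel"}

    meaning = ontology[record["tag"]]
    print(f'"{record["value"]}" is a {record["tag"]} '
          f'(broader category: {meaning["broader"]}); '
          f'file it under: {meaning["file_under"]}')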

Even in its current, pre-semantic, hyperlink-dependent, phase, the Internet brings to mind Richard Dawkins' seminal work "The Selfish Gene" (OUP, 1976). This would be doubly true for the Semantic Web.

Dawkins suggested generalizing the principle of natural selection to a law of the survival of the stable. "A stable thing is a collection of atoms which is permanent enough or common enough to deserve a name". He then proceeded to describe the emergence of "Replicators" - molecules which created copies of themselves. The Replicators that survived in the competition for scarce raw materials were characterized by high longevity, fecundity, and copying-fidelity. Replicators (now known as "genes") constructed "survival machines" (organisms) to shield them from the vagaries of an ever-harsher environment.

This is very reminiscent of the Internet. The "stable things" are HTML coded web pages. They are replicators - they create copies of themselves every time their "web address" (URL) is clicked. The HTML coding of a web page can be thought of as "genetic material". It contains all the information needed to reproduce the page. And, exactly as in nature, the higher the longevity, fecundity (measured in links to the web page from other web sites), and copying-fidelity of the HTML code - the higher its chances to survive (as a web page).

Replicator molecules (DNA) and replicator HTML have one thing in common - they are both packaged information. In the appropriate context (the right biochemical "soup" in the case of DNA, the right software application in the case of HTML code) - this information generates a "survival machine" (organism, or a web page).

The Semantic Web will only increase the longevity, fecundity, and copying-fidelity of the underlying code (in this case, OIL or XML instead of HTML). By facilitating many more interactions with many other web pages and databases, the underlying "replicator" code will ensure the "survival" of "its" web page (=its survival machine). In this analogy, the web page's "DNA" (its OIL or XML code) contains "single genes" (semantic meta-tags). The whole process of life is the unfolding of a kind of Semantic Web.

In a prophetic paragraph, Dawkins described the Internet:

"The first thing to grasp about a modern replicator is that it is highly gregarious. A survival machine is a vehicle containing not just one gene but many thousands. The manufacture of a body is a cooperative venture of such intricacy that it is almost impossible to disentangle the contribution of one gene from that of another. A given gene will have many different effects on quite different parts of the body. A given part of the body will be influenced by many genes and the effect of any one gene depends on interaction with many others...In terms of the analogy, any given page of the plans makes reference to many different parts of the building; and each page makes sense only in terms of cross-reference to numerous other pages."

What Dawkins neglected in his important work is the concept of the Network. People congregate in cities, mate, and reproduce, thus providing genes with new "survival machines". But Dawkins himself suggested that the new Replicator is the "meme" - an idea, belief, technique, technology, work of art, or bit of information. Memes use human brains as "survival machines" and they hop from brain to brain and across time and space ("communications") in the process of cultural (as distinct from biological) evolution. The Internet is a latter day meme-hopping playground. But, more importantly, it is a Network. Genes move from one container to another through a linear, serial, tedious process which involves prolonged periods of one on one gene shuffling ("sex") and gestation. Memes use networks. Their propagation is, therefore, parallel, fast, and all-pervasive. The Internet is a manifestation of the growing predominance of memes over genes. And the Semantic Web may be to the Internet what Artificial Intelligence is to classic computing. We may be on the threshold of a self-aware Web.

2. The Internet as a Chaotic Library

A. The Problem of Cataloguing

The Internet is an assortment of billions of pages which contain information. Some of them are visible and others are generated from hidden databases in response to users' requests (the "Invisible Internet").

The Internet exhibits no discernible order, classification, or categorization. Amazingly, as opposed to "classical" libraries, no one has yet invented a (sorely needed) Internet cataloguing standard (remember Dewey?). Some sites indeed apply the Dewey Decimal System to their contents (Suite101). Others default to a directory structure (Open Directory, Yahoo!, Look Smart and others).

Had such a standard existed (an agreed-upon numerical cataloguing method), each site could have classified itself. Sites would have had an interest in doing so, to increase their visibility. This, naturally, would have eliminated the need for today's clunky, incomplete and (highly) inefficient search engines.

Thus, a site whose number starts with 900 would be immediately identified as dealing with history, and multiple classification would be encouraged to allow finer cross-sections to emerge. An example of such an emerging technology of "self-classification" and "self-publication" (though limited to scholarly resources) is the "Academic Resource Channel" by Scindex.

Moreover, users would not be required to remember reams of numbers. Future browsers would be akin to catalogues, very much like the applications used in modern-day libraries. Compare this utopia to the current dystopia. Users struggle with mounds of irrelevant material to finally reach a partial and disappointing destination. At the same time, there likely are web sites which exactly match the poor user's needs. Yet what currently determines the chances of a happy encounter between user and content are the whims of the specific search engine used and things like meta-tags, headlines, a fee paid, or the right opening sentences.
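
A toy version of that numeric self-classification (the table is a small sample of Dewey's main classes):

    # Dewey-style self-classification: the hundreds digit of a site's
    # self-assigned number names its subject class.
    DEWEY = {
        3: "social sciences",
        5: "natural sciences and mathematics",
        6: "technology",
        9: "history and geography",
    }

    def classify(site_number):
        """Map a catalogue number (000-999) to its subject class."""
        return DEWEY.get(site_number // 100, "unclassified")

    print(classify(940))  # -> history and geography (any 9xx number)
    print(classify(510))  # -> natural sciences and mathematics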

B. Screen vs. Page

The computer screen, because of physical limitations (size, the fact that it has to be scrolled) fails to effectively compete with the printed page. The latter is still the most ingenious medium yet invented for the storage and release of textual information. Granted: a computer screen is better at highlighting discrete units of information. So, these differing capacities draw the battle lines: structures (printed pages) versus units (screen), the continuous and easily reversible (print) versus the discrete (screen).

The solution lies in finding an efficient way to translate computer screens to printed matter. It is hard to believe, but no such thing exists. Computer screens are still hostile to off-line printing. In other words: if a user copies information from the Internet to his word processor (or vice versa, for that matter) - he ends up with a fragmented, garbage-filled and non-aesthetic document.

Very few site developers try to do something about it - even fewer succeed.

C. Dynamic vs. Static Interactions

One of the biggest mistakes of content suppliers is that they do not provide a "static-dynamic interaction".

Internet-based content can now easily interact with other media (e.g., CD-ROMs) and with non-PC platforms (PDA's, mobile phones).

Examples abound:

A CD-ROM shopping catalogue interacts with a Web site to allow the user to order a product. The catalogue could also be updated through the site (as is the practice with CD-ROM encyclopedias). The advantages of the CD-ROM are clear: very fast access time (dozens of times faster than the access to a Web site using a dial up connection) and a data storage capacity hundreds of times bigger than the average Web page.

Another example:

A PDA plug-in disposable chip containing hundreds of advertisements or a "yellow pages". The consumer selects the ad or entry that she wants to see and connects to the Internet to view a relevant video. She could then also have an interactive chat (or a conference) with a salesperson, receive information about the company, about the ad, about the advertising agency which created the ad - and so on.

CD-ROM based encyclopedias (such as the Britannica, or the Encarta) already contain hyperlinks which carry the user to sites selected by an Editorial Board.

Note

CD-ROMs are probably a doomed medium. Storage capacity continually increases exponentially and, within a year, desktops with 80 GB hard disks will be a common sight. Moreover, the much-heralded Network Computer - the stripped-down version of the personal computer - will put at the disposal of the average user terabytes in storage capacity and the processing power of a supercomputer. What separates computer users from this utopia is the communication bandwidth. With the introduction of radio and satellite broadband services, DSL and ADSL, and cable modems coupled with advanced compression standards - video (on demand), audio and data will be available speedily and plentifully.

The CD-ROM, on the other hand, is not mobile. It requires installation and the utilization of sophisticated hardware and software. This is no user friendly push technology. It is nerd-oriented. As a result, CD-ROMs are not an immediate medium. There is a long time lapse between the moment of purchase and the moment the user accesses the data. Compare this to a book or a magazine. Data in these oldest of media is instantly available to the user and they allow for easy and accurate "back" and "forward" functions.

Perhaps the biggest mistake of CD-ROM manufacturers has been their inability to offer an integrated hardware and software package. CD-ROMs are not compact. A Walkman is a compact hardware-cum-software package. It is easily transportable, it is thin, it contains numerous, user-friendly, sophisticated functions, it provides immediate access to data. So does the discman, or the MP3-man, or the new generation of e-books (e.g., E-Ink's). This cannot be said about the CD-ROM. By tying its future to the obsolete concept of stand-alone, expensive, inefficient and technologically unreliable personal computers - CD-ROMs have sentenced themselves to oblivion (with the possible exception of reference material).

D. Online Reference

A visit to the on-line Encyclopaedia Britannica demonstrates some of the tremendous, mind boggling possibilities of online reference - as well as some of the obstacles.

Each entry in this mammoth work of reference is hyperlinked to relevant Web sites. The sites are carefully screened. Links are available to data in various forms, including audio and video. Everything can be copied to the hard disk or to a R/W CD.

This is a new conception of a knowledge centre - not just a heap of material. The content is modular and continuously enriched. It can be linked to a voice Q&A centre. Queries by subscribers can be answered by e-mail, by fax, posted on the site, hard copies can be sent by post. This "Trivial Pursuit" or "homework" service could be very popular - there is considerable appetite for "Just in Time Information". The Library of Congress - together with a few other libraries - is in the process of making just such a service available to the public (CDRS - Collaborative Digital Reference Service).

E. Derivative Content

The Internet is an enormous reservoir of archives of freely accessible, or even public domain, information.

With a minimal investment, this information can be gathered into coherent, theme oriented, cheap compilations (on CD-ROMs, print, e-books or other media).

F. E-Publishing

The Internet is by far the world's largest publishing platform. It incorporates FAQs (Q&A's regarding almost every technical matter in the world), e-zines (electronic magazines), the electronic versions of print dailies and periodicals (in conjunction with on-line news and information services), reference material, e-books, monographs, articles, minutes of discussions ("threads"), conference proceedings, and much more besides.

The Internet represents major advantages to publishers. Consider the electronic version of a p-zine.

Publishing an e-zine promotes the sales of the printed edition, it helps sign on subscribers and it leads to the sale of advertising space. The electronic archive function (see next section) saves the need to file back issues, the physical space required to do so and the irritating search for data items.

The future trend is a combined subscription to both the electronic edition (mainly for the archival value and the ability to hyperlink to additional information) and to the print one (easier to browse the current issue). The Economist is already offering free access to its electronic archives as an inducement to its print subscribers.

The electronic daily presents other advantages:

It allows for immediate feedback and for flowing, almost real-time, communication between writers and readers. The electronic version, therefore, acquires a gyroscopic function: a navigation instrument, always indicating deviations from the "right" course. The content can be instantly updated and breaking news incorporated in older content.

Specialty hand held devices already allow for downloading and storage of vast quantities of data (up to 4000 print pages). The user gains access to libraries containing hundreds of texts, adapted to be downloaded, stored and read by the specific device. Again, a convergence of standards is to be expected in this field as well (the final contenders will probably be Adobe's PDF against Microsoft's MS-Reader).

Currently, e-books are dichotomously treated either as:

Continuation of print books (p-books) by other means, or as a whole new publishing universe.

Since p-books are a more convenient medium than e-books - they will prevail in any straightforward "medium replacement" or "medium displacement" battle.

In other words, if publishers persist in the simple and straightforward conversion of p-books to e-books - then e-books are doomed. They are simply inferior and cannot offer the comfort, tactile delights, browseability and scanability of p-books.

But e-books - being digital - open up a vista of hitherto neglected possibilities. These will only be enhanced and enriched by the introduction of e-paper and e-ink. Among them:

* Hyperlinks within the e-book and without it - to web content, reference works, etc.;
* Embedded instant shopping and ordering links;
* Divergent, user-interactive, decision driven plotlines;
* Interaction with other e-books (using a wireless standard) - collaborative authoring or reading groups;
* Interaction with other e-books - gaming and community activities;
* Automatically or periodically updated content;
* Multimedia;
* Database, Favourites, Annotations, and History Maintenance (archival records of reading habits, shopping habits, interaction with other readers, plot related decisions and much more);
* Automatic and embedded audio conversion and translation capabilities;
* Full wireless piconetworking and scatternetworking capabilities.

The technology is still not fully there. Wars rage in both the wireless and the e-book realms. Platforms compete. Standards clash. Gurus debate. But convergence is inevitable, and with it the e-book of the future.

G. The Archive Function

The Internet is also the world's biggest cemetery: tens of thousands of deadbeat sites, still accessible - the "Ghost Sites" of this electronic frontier.

This, in a way, is collective memory. One of the Internet's main functions will be to preserve and transfer knowledge through time. It is called "memory" in biology - and "archive" in library science. The history of the Internet is being documented by search engines (Google) and specialized services (Alexa) alike.

3. The Internet as a Collective Nervous System

Drawing a comparison from the development of a human infant - the human race has just commenced to develop its neural system.

The Internet fulfils all the functions of the nervous system in the body and is, both functionally and structurally, quite similar. It is decentralized and redundant (each part can serve as a functional backup in case of malfunction). It hosts information accessible through various paths, contains a memory function, and is multimodal (multimedia - textual, visual, audio and animation).

I believe that the comparison is not superficial and that studying the functions of the brain (from infancy to adulthood) is likely to shed light on the future of the Net itself. The Net - exactly like the nervous system - provides pathways for the transport of goods and services - but also of memes and information, their processing, modeling, and integration.

A. The Collective Computer

Carrying the metaphor of "a collective brain" further, we would expect the processing of information to take place on the Internet, rather than inside the end-user’s hardware (the same way that information is processed in the brain, not in the eyes). Desktops will receive results and communicate with the Net to receive additional clarifications and instructions and to convey information gathered from their environment (mostly, from the user).

Put differently:

In the future, servers will contain not only information (as they do today) but also software applications. The user of an application will not be forced to buy it. He will not be driven into hardware-related expenditures to accommodate the ever-growing size of applications. He will not find himself wasting his scarce memory and computing resources on passive storage. Instead, he will use a browser to call a central computer. This computer will contain the needed software, broken into its elements (=applets, small applications). Anytime the user wishes to use one of the functions of the application, he will siphon it off the central computer. When finished, he will "return" it. Processing speeds and response times will be such that the user will not feel at all that he is not interacting with his own software (the question of ownership will be very blurred). This technology is available, and it has provoked a heated debate about the future shape of the computing industry as a whole (desktops - really power packs - or network computers, little more than dumb terminals). Access to online applications is already offered to corporate users by ASPs (Application Service Providers).

In the last few years, scientists have harnessed the combined power of online PCs to perform astounding feats of distributed parallel processing. Millions of PCs connected to the net co-process signals from outer space, analyze meteorological data, and solve complex equations. This is a prime example of a collective brain in action.
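
A minimal local sketch of that pattern, in the spirit of projects like SETI@home: a task is split into chunks, independent workers process them, and a coordinator merges the results. Here worker processes on one machine stand in for volunteer PCs, and the "signal analysis" is a toy placeholder.

    # Split a big dataset into chunks and process them in parallel.
    from multiprocessing import Pool

    def analyze_chunk(samples):
        """Stand-in 'signal analysis': return the strongest sample in a chunk."""
        return max(samples)

    if __name__ == "__main__":
        signal = list(range(1_000_000))  # pretend radio-telescope data
        chunks = [signal[i:i + 100_000] for i in range(0, len(signal), 100_000)]
        with Pool() as pool:             # each worker plays the role of one "PC"
            partial_results = pool.map(analyze_chunk, chunks)
        print(max(partial_results))      # the coordinator merges the results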

B. The Intranet - a Logical Extension of the Collective Computer

LANs (Local Area Networks) are no longer a rarity in corporate offices. WANs (Wide Area Networks) are used to connect geographically dispersed organs of the same legal entity (branches of a bank, daughter companies of a conglomerate, a sales force). Many LANs and WANs are going wireless.

The wireless intranet/extranet and LANs are the wave of the future. They will gradually eliminate their fixed-line counterparts. The Internet offers equal, platform-independent, location-independent and time-of-day-independent access to corporate memory and nervous system. Sophisticated firewall security applications protect the privacy and confidentiality of the intranet from all but the most determined and savvy crackers.

The intranet is an intra-organizational communication network, constructed on the platform of the Internet, and it therefore enjoys all its advantages. The extranet is open to clients and suppliers as well.

The company's server can be accessed by anyone authorized, from anywhere, at any time (with local - rather than international - communication costs). The user can leave messages (internal e-mail or v-mail), access information - proprietary or public - from it, and participate in "virtual teamwork" (see next chapter).

The development of measures to safeguard server routed inter-organizational communication (firewalls) is the solution to one of two obstacles to the institutionalization of Intranets. The second problem is the limited bandwidth which does not permit the efficient transfer of audio (not to mention video).

It is difficult to conduct video conferencing through the Internet. Even the voices of discussants who use internet phones (IP telephony) come out (though very slightly) distorted.

All this did not prevent 95% of the Fortune 1000 from installing intranets. 82% of the rest intend to install one by the end of this year. Medium to big-size American firms have 50-100 intranet terminals for every internet one.

One of the greatest advantages of the intranet is the ability to transfer documents between the various parts of an organization. Consider Visa: it pushed 2 million documents per day internally in 1996.

An organization equipped with an intranet can (while protected by firewalls) give its clients or suppliers access to non-classified correspondence, or inventory systems. Many B2B exchanges and industry-specific purchasing management systems are based on extranets.

C. The Transport of Information - Mail and Chat

The Internet (its e-mail function) is eroding traditional mail. 90% of customers with on-line access use e-mail from time to time and 60% work with it regularly. More than 2 billion messages traverse the internet daily.

E-mail applications are available as freeware and are included in all browsers. Thus, the Internet has completely assimilated what used to be a separate service, to the extent that many people make the mistake of thinking that e-mail is a feature of the Internet.

The internet will do to phone calls what it has done to mail. Already there are applications (Intel's, Vocaltec's, Net2Phone) which enable the user to conduct a phone conversation through his computer. The voice quality has improved. The discussants can cut into each other's words, argue and listen to tonal nuances. Today, the parties (two or more) engaging in the conversation must possess the same software and the same (computer) hardware. In the very near future, computer-to-regular-phone applications will eliminate this requirement. And, again, simultaneous multi-modality: the user can talk over the phone, see his party, send e-mail, receive messages and transfer documents - without obstructing the flow of the conversation.

The cost of transferring voice will become so negligible that free voice traffic is conceivable in 3-5 years. Data traffic will overtake voice traffic by a wide margin.

The next phase will probably involve virtual reality. Each of the parties will be represented by an "avatar", a 3-D figurine generated by the application (or the user's likeness mapped and superimposed on the avatar). These figurines will be multi-dimensional: they will possess their own communication patterns, special habits, history, preferences - in short, their own "personality".

Thus, they will be able to maintain an "identity" and a consistent pattern of communication which they will develop over time.

Such a figure could host a site, accept, welcome and guide visitors, all the time bearing their preferences in its electronic "mind". It could narrate the news, like the digital anchor "Ananova" does. Visiting sites in the future is bound to be a much more pleasant affair.

D. The Transport of Value - E-cash

In 1996, four corporate giants (Visa, MasterCard, Netscape and Microsoft) agreed on a standard for effecting secure payments through the Internet: SET. Internet commerce is supposed to mushroom to $25 billion by 2003. Site owners will be able to collect rent from passing visitors - or fees for services provided within the site. Amazon instituted an honour system to collect donations from visitors. PayPal provides millions of users with cash substitutes. Gradually, the Internet will compete with central banks and banking systems in money creation and transfer.

E. The Transport of Interactions - The Virtual Organization

The Internet allows for simultaneous communication and the efficient transfer of multimedia (video included) files between an unlimited number of users. This opens up a vista of mind boggling opportunities which are the real core of the Internet revolution: the virtual collaborative ("Follow the Sun") modes.

Examples:

A group of musicians is able to compose music or play it - while spatially and temporally separated;

Advertising agencies are able to co-produce ad campaigns in a real time interaction;

Cinema and TV films are produced from disparate geographical spots through the teamwork of people who never meet, except through the Net.

These examples illustrate the concept of the "virtual community". Space and time will no longer hinder team collaboration, be it scientific, artistic, cultural, or an ad hoc arrangement for the provision of a service (a virtual law firm, or accounting office, or a virtual consultancy network). The intranet can also be thought of as a "virtual organization", or a "virtual business".

The virtual mall and the virtual catalogue are prime examples of spatial and temporal liberation.

In 1998, there were well over 300 active virtual malls on the Internet. In 2000, they were frequented by 46 million shoppers buying goods and services.

The virtual mall is an Internet "space" (pages) wherein "shops" are located. These shops offer their wares using visual, audio and textual means. The visitor passes through a virtual "gate" or storefront and examines the merchandise on offer, until he reaches a buying decision. Then he engages in a feedback process: he pays (with a credit card), buys the product, and waits for it to arrive by mail (or downloads it).

The manufacturers of digital products (intellectual property such as e-books or software) have begun selling their merchandise on-line, as file downloads. Yet, slow communications speeds, competing file formats and reader standards, and limited bandwidth - constrain the growth potential of this mode of sale. Once resolved - intellectual property will be sold directly from the Net, on-line. Until such time, the mediation of the Post Office is still required. As long as this is the state of the art, the virtual mall is nothing but a glorified computerized mail catalogue or Buying Channel, the only difference being the exceptionally varied inventory.

Websites which started as "specialty stores" are fast transforming themselves into multi-purpose virtual malls. Amazon.com, for instance, has bought into a virtual pharmacy and into other virtual businesses. It is now selling music, video, electronics and many other products. It started as a bookstore.

This contrasts with a much more creative idea: the virtual catalogue. It is a form of narrowcasting (as opposed to broadcasting): a surgically accurate targeting of potential consumer audiences. Each group of profiled consumers (no matter how small) is fitted with its own, digitally generated, catalogue. This is updated daily: the variety of wares on offer (adjusted to reflect inventory levels, consumer preferences, and goods in transit) and prices (sales, discounts, package deals) change in real time. Amazon has incorporated many of these features on its web site. The user enters the site and delineates his consumption profile and his preferences. A customized catalogue is immediately generated for him, including specific recommendations. The history of his purchases, preferences and responses to feedback questionnaires is accumulated in a database. This intellectual property may well be Amazon's main asset.

There are no technological obstacles to implementing this vision today - only administrative and legal (patent) ones. Big brick-and-mortar retail stores are not up to processing the flood of data expected to result. They also remain highly sceptical regarding the feasibility of the new medium. And privacy issues prevent data mining or the effective collection and usage of personal data (remember the case of Amazon's "Readers' Circles").
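
A toy sketch of the catalogue-generation idea described above (the products, the profile and the discount rule are all invented for illustration):

    # Each profiled shopper gets a freshly generated catalogue: filtered by
    # interests, trimmed to in-stock items, priced with a personal discount.
    inventory = [
        {"name": "travel guide", "topic": "travel", "price": 12.00, "stock": 3},
        {"name": "jazz CD", "topic": "music", "price": 9.50, "stock": 0},
        {"name": "city map", "topic": "travel", "price": 4.00, "stock": 12},
    ]

    def build_catalogue(profile):
        return [
            {**item, "price": round(item["price"] * (1 - profile["discount"]), 2)}
            for item in inventory
            if item["topic"] in profile["interests"] and item["stock"] > 0
        ]

    # A profile distilled from purchase history and questionnaire answers.
    shopper = {"interests": {"travel"}, "discount": 0.10}
    print(build_catalogue(shopper))  # only in-stock travel items, repriced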

The virtual catalogue is a private case of a new internet off-shoot: the "smart (shopping) agents". These are AI applications with "long memories".

They draw detailed profiles of consumers and users and then suggest purchases and refer to the appropriate sites, catalogues, or virtual malls.

They also provide price comparisons and the new generation cannot be blocked or fooled by using differing product categories.

In the future, these agents will also cover brick-and-mortar retail chains and, in conjunction with wireless, location-specific services, issue a map of the branch or store closest to an address specified by the user (the default being his residence), or yielded by his GPS-enabled wireless mobile or PDA. This technology can be seen in action in a few music sites on the web and is likely to be dominant with wireless internet appliances. The owner of an internet-enabled (third generation) mobile phone is likely to be the target of geographically specific marketing campaigns, ads and special offers pertaining to his current location (as reported by GPS, the satellite-based Global Positioning System).

F. The Transport of Information - Internet News

Internet news is advantaged: it is frequently and dynamically updated (unlike static print news), always accessible (like print news), immediate and fresh.

The future will witness a form of interactive news. A special "corner" of the news Web site will accommodate "breaking news" posted by members of the public (or corporate press releases). This will provide readers with a glimpse into the making of the news, the raw material news is made of. The same technology will be applied to interactive TV. Content will be downloaded from the internet and displayed as an overlay on the TV screen or in a box on it. The content downloaded will be directly connected to the TV programming. Thus, the biography and track record of a football player will be displayed during a football match, and the history of a country when it receives news coverage.

4. Terra Internetica - Internet, an Unknown Continent

Laymen and experts alike talk about "sites" and "advertising space". Yet, the Internet was never compared to a new continent whose surface is infinite.

The Internet has its own real estate developers and construction companies. The real life equivalents derive their profits from the scarcity of the resource that they exploit - the Internet counterparts derive their profits from the tenants (content producers and distributors, e-tailers, and others).

Entrepreneurs bought "Internet Space" (pages, domain names, portals) and leveraged their acquisition commercially by:

* Renting space out;
* Constructing infrastructure on their property and selling it;
* Providing an intelligent gateway, entry point (portal) to the rest of the internet;
* Selling advertising space which subsidizes the tenants (Yahoo!-Geocities, Tripod and others);
* Cybersquatting (purchasing specific domain names identical to brand names in the "real" world) and then selling the domain name to an interested party.

Internet Space can be easily purchased or created. The investment is low and getting lower with the introduction of competition in the field of domain registration services and the increase in the number of top-level domains.

Then, infrastructure can be erected - for a shopping mall, for free home pages, for a portal, or for another purpose. It is precisely this infrastructure that the developer can later sell, lease, franchise, or rent out.

But this real estate bubble was the culmination of a long and tortuous process.

At the beginning, only members of the fringes and the avant-garde (inventors, risk-assuming entrepreneurs, gamblers) invest in a new invention. No one can say what the optimal uses of the invention are (in other words, what its future is). Many - mostly members of the scientific and business elites - argue that there is no real need for the invention and that it substitutes a new and untried way for old and tried modes of doing the same things (so why assume the risk of investing in the unknown and the untried?).

Moreover, these criticisms are usually well-founded.

To start with, there is, indeed, no need for the new medium. A new medium invents itself - and the need for it. It also generates its own market to satisfy this newly found need.

Two prime examples of this self-recursive process are the personal computer and the compact disc.

When the PC was invented, its uses were completely unclear. Its performance was lacking, its abilities limited, and it was unbearably user-unfriendly. It suffered from faulty design, lacked any user comfort or ease of use, and required considerable professional knowledge to operate. The worst part was that this knowledge was exclusive to the new invention (not portable). It reduced labour mobility and limited one's professional horizons. There were many gripes among workers assigned to tame the new beast. Managers regarded it at best as a nuisance.

The PC was thought of, at the beginning, as a sophisticated gaming machine, an electronic baby-sitter. It included a keyboard, so it was thought of in terms of a glorified typewriter or spreadsheet. It was used mainly as a word processor (and the outlay justified solely on these grounds). The spreadsheet was the first real PC application and it demonstrated the advantages inherent to this new machine (mainly flexibility and speed). Still, it was more of the same. A speedier slide rule. After all, said the unconvinced, what was the difference between this and a hand-held calculator (some of which already had computing, memory and programming features)?

The PC was recognized as a medium only 30 years after it was invented with the introduction of multimedia software. All this time, the computer continued to spin off markets and secondary markets, needs and professional specialties. The talk as always was centred on how to improve on existing markets and solutions.

The Internet is the computer's first important application. Hitherto the computer was only quantitatively different to other computing or gaming devices. Multimedia and the Internet have made it qualitatively superior, sui generis, unique.

Part of the problem was that the Internet was invented, is maintained and is operated by computer professionals. For decades these people have been conditioned to think in Olympic terms: faster, stronger, higher - not in terms of the new, the unprecedented, or the non-existent. Engineers are trained to improve - seldom to invent. With few exceptions, its creators stumbled across the Internet - it invented itself despite them.

Computer professionals (hardware and software experts alike) - are linear thinkers. The Internet is non linear and modular.

It is still the age of hackers. There is still a lot to be done in improving technological prowess and powers. But their control of the contents is waning and they are being gradually replaced by communicators, creative people, advertising executives, psychologists, venture capitalists, and the totally unpredictable masses who flock to flaunt their home pages and graphomania.

These all are attuned to the user, his mental needs and his information and entertainment preferences.

The compact disc is a different tale. It was intentionally invented to improve upon an existing technology (basically, Edison’s phonograph). Market-wise, this was a major gamble. The improvement was, at first, debatable (many said that the sound quality of the first generation of compact discs was inferior to that of contemporaneous record players). Consumers had to be convinced to change both software and hardware and to dish out thousands of dollars just to listen to what the manufacturers claimed was a more authentically reproduced sound. A better argument was the longer life of the software (though when contrasted with the limited life expectancy of the consumer, some of the first sales pitches sounded absolutely morbid).

The computer suffered from unclear positioning. The compact disc was very clear as to its main functions - but had a rough time convincing the consumers that it was needed.

Every medium is first controlled by the technical people. Gutenberg was a printer, not a publisher. Yet, he is the world's most famous publisher. The technical cadre is joined by dubious or small-scale entrepreneurs and, together, they establish ventures with no clear vision, market-oriented thinking, or orderly plan of action. The legislator is also dumbfounded and does not grasp what is happening - thus, there is no legislation to regulate the use of the medium. Witness the initial confusion concerning copyrighted vs. licensed software, e-books, and the copyrights of ROM-embedded software. Abuse or under-utilization of resources grows. The sale of radio frequencies to the first cellular phone operators in the West - a situation which repeats itself in Eastern and Central Europe nowadays - is an example.

But then more complex transactions - exactly as in real estate in "real life" - begin to emerge. The Internet is likely to converge with "real life". It is likely to be dominated by brick and mortar entities which are likely to import their business methods and management. As its eccentric past (the dot.com boom and the dot.bomb bust) recedes - a sustainable and profitable future awaits it.

About The Author

Sam Vaknin is the author of Malignant Self Love - Narcissism Revisited and After the Rain - How the West Lost the East. He is a columnist for Central Europe Review, PopMatters, and eBookWeb, a United Press International (UPI) Senior Business Correspondent, and the editor of mental health and Central East Europe categories in The Open Directory, Bellaonline, and Suite101.

Until recently, he served as the Economic Advisor to the Government of Macedonia.

Visit Sam's Web site at http://samvak.tripod.com
palma@unet.com.mk

This article was posted on October 12, 2003


Monday, July 17, 2006

Protect Your Computer...and Your Business!

We all take the computer for granted. I mean, all we have to do is switch it on and it's ready to go. But did you ever stop to think what would happen if your computer suddenly crashed? And that is the only computer you have to work on!

What will happen to your work and your business for the next few days or weeks?

Do you have the original or a copy of all your programs?

Do you have the setup configurations, e.g. for your ISP? You will need these to re-install some programs.

Do you have a copy of your email address book? Your email list or address book is vital to your business.

Do you remember all your passwords - for retrieving email, connecting to your ISP, membership sites, etc?

So what can you do to ensure that your computer will run as well as you'd expect, and that you can continue working when your computer is down? Here are a few simple tips:

1. Is your computer protected from viruses? Install anti-virus software such as Norton AntiVirus or McAfee, and make sure to get regular updates. New viruses are coming out more often these days, so anything more than 3 months old needs to be updated today.

You can do an online virus scan at: http://housecall.antivirus.com

2. Install a firewall. Anytime your computer is connected to the Internet without a firewall, it is operating under an "open door" policy for intruders. Hackers can get in, take what they want, and even leave open a "back door" so they can turn your computer into a "zombie" and use it to attack other computers or distribute porn and spam.

Your bank account and credit card information, passwords, documents and personal files can be stolen while you're busy surfing. Don't let that happen!

You can download a personal firewall from here:
http://www.zonelabs.com or from
http://soho.sygate.com/products/spf_standard.htm

3. Make regular backups of your important files. Keep a record of all vital information, such as passwords, system configurations, etc. in a file; also print a copy of this and keep it in a safe place. Make a duplicate and keep it at another location.

If your computer does not come with a zip drive or CDRW drive, it would be a good idea to invest in one. Zip drives and CDRW drives are inexpensive and can be easily installed. A blank CD, for example, costs less than $1.00 and can store up to 650MB of data.

If you lose some files or your hard disk crashes, you can easily retrieve them from the backups. And if your computer is down for repair, you can take that backup CD and work from another machine.
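If you'd rather not copy the files by hand each time, a few lines of script can gather them up for burning. Here's a minimal sketch in PHP (which features further down this page) - the file names are invented, so substitute your own:

<?php
// A sketch only: copy a list of vital files into a date-stamped
// folder, ready to burn to CD. The file names are made up.
$files  = array('addressbook.csv', 'passwords.txt', 'isp-settings.txt');
$target = 'backup-' . date('Ymd');   // e.g. backup-20060717

if (!is_dir($target)) {
    mkdir($target);
}

foreach ($files as $file) {
    if (file_exists($file) && copy($file, $target . '/' . basename($file))) {
        echo "Backed up $file\n";
    } else {
        echo "FAILED: $file\n";
    }
}
?>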

4. Remove all unwanted files from your hard disk. You can safely remove the files in your temporary internet folder. In Internet Explorer, select Tools -> Internet Options -> Delete Files.

5. Increase your system's performance by defragmenting your hard disk regularly. As applications and files are saved and deleted, they gradually cause your hard disk to fragment. By defragmenting your hard disk you can optimise the performance of your computer. Defragmenting may also save wear and tear on your hard disk and extend its lifespan.

Do this today.

Saturday, July 15, 2006

Six Easy White-Listing Ways... Stop Losing Important Emails!

Are you sure you are receiving all of the important emails that are sent to you?

"The chances are that you are among the 42% of the people who ARE NOT receiving the genuine emails and newsletters that you requested for".

Why is this so?

Increasingly, ISPs are using filtering systems to try to keep spam out of customers' inboxes. Being automated, these filters are not perfect. Many authentic emails get caught in them.

Sometimes, they accidentally filter that "All-Important-Email" you were waiting for, and you have no way of knowing which of your emails has been filtered. The end result: you lose critical info that may prove vital to your business.

Is there a way to solve the problem?

Fortunately for all of us, simple solutions exist. But the action has to come from you, to make sure that these critical communications reach your mailbox - unblocked.

Six of the most common and easy solutions are given below. They are simple to implement.

#1. *The HOTMAIL User*: You can 'Safe List' an email ID in Hotmail. Here's how:

1. Choose the 'Options' tab from the top
2. Select 'Safe List' (given under the heading 'Mail Handling')
3. Type the email address from which you want to receive messages without filtering into the one-line form.
4. Now choose 'Add'

#2 *The AOL User*: Place an email ID in the 'Address Book' in AOL. Here's how:

1. Go to Keyword Mail Controls
2. Select the screen name to which the newsletter is sent (e.g. "HomeBiz Tip E-Mag")
3. Now choose 'Customize Mail Controls' For This Screen Name
4. For AOL v7.0, include in the "exclusion and inclusion parameters" section the domains from which email is sent, e.g. @learnhomebusiness.com, @learnhomebusiness.net. For AOL v8.0, choose "Allow email from all AOL members, email addresses and domains"
5. Choose 'Next'.
6. Choose "Save" displayed at the bottom.

Important Note On AOL 9.0: AOL 9.0 has become more complicated. The best way is to add an email ID to the "Person I Know" buddy list. All mail you receive from this email ID will pass through the filters. So, when you join a newsletter, make sure its address is ADDED to your buddy list.

#3 *The YAHOO User*: Correct the 'Bulk' folder in your Yahoo mail. Here's how:

1. Newsletters get mistakenly filtered to your Yahoo 'Bulk' folder. Go to your 'Bulk' folder, locate the filtered newsletter and choose "this is not spam", next to the "From" field.

Also, to ensure that you do not miss out on important emails to your Yahoo Inbox, follow these steps:

1. Open your Yahoo mailbox
2. Choose 'Mail Options' (Given at the right corner)
3. Choose 'Filters'
4. Choose the 'Add' button.
5. Now, in the top row - the From header - choose 'contains'.
6. Type the domain from which the newsletter is sent, e.g. learnhomebusiness.com
7. Finally, at the bottom - Move the message to: - choose Inbox.
8. Choose the 'Add Filter' button.

#4. *For OTHER Users*: Meant for email programs like Outlook, Outlook Express, Eudora and Netscape Mail. Here's how:

Inform your ISP or the person responsible for your email that you want to receive all communications from a particular domain.

For example, a member of http://www.learnhomebusiness.com would ask them to white-list the eZine 'HomeBiz Tips-EMag' so that he can continue to receive its zero-cost products without a break.

#5. *For Your Own Filter Software*: Many times the filter software installed on your computer is the culprit. Here's how to prevent that.

Look for "Options" in the filter software that you have installed in your computer. Then give permissions for all emails from a particular email ID or domain.

#6. *Two Additional Tips To Prevent Losing Important Emails*:

Tip #1: You may currently be receiving all your email messages without a hitch, but it's still advisable to white-list and prevent future problems.

Tip #2: No matter what email system you are currently using, add the email IDs of your opt-in newsletters to the 'Address Book' of that system.

Currently, white-listing is the ONLY way to ensure that you receive all your important emails. Do not ignore this important aspect of email communication.

From a personal angle, the major chunk of what I have learned so far is from small, nifty newsletters that arrive in my mailbox with a welcome smile. I can never block these little capsules of vital info.

If I do, I am blocking myself from the titbits that add to my knowledge. Ultimately, integrating these titbits into my website http://www.learnhomebusiness.com keeps it live and current - every day.

Thursday, July 13, 2006

Email Communication

Gartner estimates that half of the 5.5 trillion emails sent in 2001 were business related.

Email has already taken over as businesses' main communication channel. What most people have failed to learn is that manners online are even more important than basic social manners. In front of the monitor, your audience cannot judge you on your new Hugo Boss suit, your body scent, the tone of your voice, or your little gestures. Good language skills and proper email guidelines are important to ensure that your message gets across.

When drafting an email, take note of these few S's:

* Speed
* Succinct
* Sell
* Suitable
* Subject, Salutations & Sincerely

Speed

Emails are delivered in a matter of seconds. Where business communications are concerned, not checking your email at least once a day is frowned upon. The wide acceptance of email is due partly to its speed; do not get bogged down by heaps of emails. Surveys have shown that users do not expect to wait more than three working days for a reply.

Succinct

Omit needless words. Some people receive hundreds of emails a day; chances are the recipient will skip your email after 2 seconds. Keeping the body of the email simple also avoids miscommunication caused by the recipient second-guessing the message.

Sell

Sell yourself, your idea, your product. Attempt to cross-sell, up-sell. Whatever the nature of your email, you will be able to slot in a witty sentence to sell. Businesses have been developed from a simple query like “I heard your company’s in charge of a new project.”

Suitable

Know the audience. Don't send irrelevant messages. Using email, you cannot get the instant response you would get talking face to face; you won't know if your ideas are well received till much later. Stay away from sensitive topics; you might never have the chance to explain a mistake you have made.

Subject, Salutations & Sincerely

The subject of the email should be meaningful. It helps prepare the reader for the content, and also makes it easier for the reader to search for the email later on. Open the email with "Dear xxx" if you do not know the recipient personally. You may prefer to go with "Hi xxx" if you want to sound friendly to a close contact. Closing emails with a simple "Regards, XXX" is nice, but not good enough. A good email should preferably close with your business card information: include your full name, the organization that you represent, and other contact methods if possible. Major companies spend millions of dollars on building a brand name; flash it.

------

Dear Reader,

Thanks for reading. I hope you like the article so far.

Besides the few main 'S' points I have highlighted, good grammar is important too. Do not type using only caps or use exclamation marks excessively. Avoid abbreviations unless they are commonly known.

With practice, recipients of your emails will have a better impression of you.

Watch out for the next issue of j-hunter newsletter!

Wednesday, July 12, 2006

The Seven Golden Rules Of Data Backups

Backups of company data are carried out for two main reasons. The first is to cater for those times when a document is inadvertently deleted or damaged and you wish to recover the original; the second is as part of a disaster recovery plan in case something catastrophic happens to your computers (e.g., fire or theft).

Backups cost time, money and effort to implement, and they are of no value right up until the time you need them. This means they tend to be given a low priority, but ultimately they may easily represent the difference between your business surviving and failing. In this TipSheet, we look at the most common mistakes businesses make with backups.

1. Backup often

Re-entering data is tedious and frustrating. Backing up your company data once a week means that the most you should ever have to re-input is one week's worth. Backing up once a day means the most you should ever have to re-input is one day's worth. Frequent backups lessen the impact of data loss.

2. Don't keep any volatile data on desktop PCs

In many organisations, documents are kept on the hard drives of desktop PCs. It is unlikely that these are backed up regularly, if at all. A PC can easily be replaced; last week's quotations may not be so easy to replace. In particular, check that email is not stored on the local hard drive (this is very common in small to medium-sized businesses). All documents, spreadsheets, email, etc. should be kept on a central server, which is in turn backed up regularly.

3. Automate the backup process

Backups are tedious to do. At 6:30pm, most people would prefer to set off home or join colleagues in the bar rather than stay in the office to find the correct tape and start a backup. Automating tedious tasks means they get done.
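On a Unix-type server, or any machine with PHP and cron available, even a short script can do the job. This is a sketch only - the paths, the 6:30pm timing and the file layout are assumptions, not a recommendation:

<?php
// backup.php - a sketch of a nightly archive job; the paths are
// assumptions. A crontab entry like the following would run it at
// 6:30pm every day (crontabs are discussed further down this page):
//   30 18 * * * php -q /home/user/backup.php > /dev/null
$source  = '/home/user/documents';
$archive = '/backups/docs-' . date('Ymd') . '.tar.gz';

// Shell out to tar; escapeshellarg() protects against odd path names.
system('tar -czf ' . escapeshellarg($archive) . ' ' . escapeshellarg($source), $status);
echo ($status === 0) ? "Backup written to $archive\n" : "Backup FAILED (code $status)\n";
?>

The date-stamped filename means each night's archive is kept rather than overwritten, which also makes the next rule easier to follow.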

4. Monitor the backup process

While automating backups is a good idea, do check that they are running correctly. Make sure new files are being backed up; make sure the files of new users are being backed up. A quick check once a week could avert a much more serious problem later.
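Part of that check can itself be automated. As a sketch (the path, the 1KB threshold and the address are all assumptions), a script run by cron each morning could complain when yesterday's archive is missing or suspiciously small:

<?php
// check_backup.php - a sketch: did yesterday's archive appear, and is
// it a sensible size? The path, threshold and address are assumptions.
$expected = '/backups/docs-' . date('Ymd', time() - 86400) . '.tar.gz';

if (!file_exists($expected) || filesize($expected) < 1024) {
    mail('admin@example.com', 'Backup check FAILED',
         "Missing or undersized backup: $expected");
}
?>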

5. Keep backups offsite

If your business premises suffer a fire or flood, it is likely that backup media will be lost as well. Fireproof safes only protect media for a given time, typically one hour; if you use one, check the manufacturer's specification. If you always keep your backup tape in the server, then when the server is stolen the thief will probably throw the tape away. It's worth nothing to him, but it could represent bankruptcy to you.

6. Produce a "backup recovery" manual

A major disaster is not the time to try to remember how to recover data from your backup media. Have an idiot-proof, step-by-step procedure written - with a copy stored off-site - detailing how to reinstate your company data.

7. Test the recovery procedure periodically

Without warning, give the backup recovery manual to a member of staff and see how long it takes them to recover data. Many organisations never do this! No one involved with creating the manual or the backups themselves should be involved in the test. The results of the test should be analysed and the manual updated accordingly. A recovery test should be carried out at least twice a year. This proves both that the backups themselves are usable, and that your organisation understands how to use them if necessary.

Monday, July 10, 2006

An Easy way to Deal with Email Viruses and Worms

If you feel intimidated when someone tries to teach you something new on the computer, this article is for you!

In the course of my career, I’ve worked with many people who I knew were smart but were convinced that they couldn’t learn how to do new things on a computer. At some point, they’d convinced themselves that they weren’t one of those “computer people”. I would try to teach them how to do something that would make their work a lot easier or faster, and I could see them shut down immediately. “I can’t do stuff like that. I’m just not good at it.”

In a few cases, my colleagues were simply amazed that I knew how to do things like upload photos to the Internet or how to start a new folder in Windows. Some would tell me that I must have some special gift for technology. I would just laugh and tell them that nothing could be further from the truth! I have a degree in psychology. I’m not a math and science type of person, and if it weren’t for the patience of my tech-minded husband and friends, I never would have learned how to do these things.

The fact is, computers are such a part of our lives that you can't afford to think of yourself as not a "computer person". The reason I think a lot of people are intimidated when learning about computers is that so many of the instructions and directions are full of jargon, and assume that everyone has as much technological knowledge as people who work with computers for a living.

I’m convinced that if I can do it, anyone else can do it too. All it takes is an open mind, confidence, and someone to explain things to you step by step in plain English.

---------------------

Aside from using anti-virus software, there is another way to keep some email viruses or worms from driving you crazy and clogging up your inbox. While the "Sobig" virus seems to have died down, there are sure to be some like it in the future. If you would like to prevent these bogus emails from reaching your inbox, you can set up rules in Outlook or Outlook Express to send them directly to the delete folder.

Although the Sobig virus seems to be under control, it might be good practice to do this now. That way, when the next big virus comes around, you'll be able to filter it out right away. It might seem like it's difficult, but I know that anyone can do this. If you're smart enough to do your taxes and balance your checkbook, you can do this, trust me.

If you're using Outlook Express, go to Tools, then select "message rules" and then "mail". A box will pop up with buttons on the right side of the window. Hit the "new" button. Another box pops up with three windows. In the first box, click the box next to "Where the subject line contains specific words".

In the second box, click "delete it". Now in the third box it should say, "Apply this rule after the message arrives/Where the Subject line contains specific words/Delete it." Click on the words "contains specific words".

This is where you tell the program what words to look for in the subject line. It's very important to remember that this is case sensitive, meaning that if you put "abc" in, it will only delete emails with "abc" in the subject, and not "ABC" or "Abc". For the Sobig virus, there were eight subject lines that were commonly used. If you would like to read more about this, go to http://www.webpro.com/iq/SobigF.asp. The subject lines are:

* That movie
* Wicked screensaver
* Your application
* Approved
* My details
* Details
* Your details
* Thank you

It's a good idea to copy and paste the phrases above to make sure the capitalization is exactly the same. (Copy = Ctrl+C, paste = Ctrl+V.)

Enter the first phrase into the box and then click on the "add" button to the right. That phrase will appear in the box below. You can add as many phrases as you'd like, clicking "add" after each one. When you're done, hit "ok". Then hit "ok" again.

At this point, we are back to one box open with buttons on the right. Be sure to click the "apply now" button if you want the rule to apply to the email that is already in your inbox as well as any future emails.

When you are done with that, click the "ok" button and you're finished.

If you use Outlook, the process is a little different.

First go to Tools, and then choose "Rules Wizard". Click on the "new" button on the right. It should say at the top of a new box "What type of rule would you like to create?" There will be a list of types of rules: you want to choose "Check messages when they arrive", which is at the top so it should already be highlighted. Simply click on "next".

The next box asks you "which conditions do you want to check?" with a list of choices, each with an empty box next to it. Scroll down until you get to "with specific words in the subject", and click the box in front of it.

Once you click it, you'll notice that "with specific words in the subject" appears in the box below. Click on the "specific words" in the lower box here to specify which words the program should look for.

This is where you tell the program what words to look for in the subject line. It’s very important to remember that this is case sensitive, meaning that if you put "abc" in, it will only delete emails with "abc" in the subject, and not "ABC" or "Abc".

For the Sobig virus, there were eight subject lines that were commonly used. If you would like to read more about this, go to http://www.webpro.com/iq/SobigF.asp. The subject lines are:

* That movie
* Wicked screensaver
* Your application
* Approved
* My details
* Details
* Your details
* Thank you

A new box will pop up that says, "search text" at the top. Enter one of the phrases you want to filter out and click "add". You may enter as many phrases as you'd like, clicking "add" after each one. When you're finished, click "Ok". You'll be taken back to the previous box. Click "next" at the bottom.

At this point, you have two choices. You can either specify that these emails go into your delete folder to be reviewed later, or you can specify that they be permanently deleted from Outlook so that you never see them. Either click in the box next to "delete it", which simply moves the message automatically to the delete folder, or click "permanently delete", which means that you will never see the email at all and won't be able to get it back.

Click "next" again and you're now at the exceptions box. I can see no reason to use the exceptions when dealing with the Sobig viruses and others like it. There might be a temptation to make an exception for people who are in your address book or close friends. But remember, a virus will take over someone else's address book and send you emails without the person ever knowing. Anyone in your address book could send you an infected email without knowing it. I recommend that you hit “next” without selecting any exceptions at this point.

In the next box, the program would like to know the name of the rule you've just created. You might want to call it "viruses #1" or something similar. Click finish after naming your rule. At this point you have another choice: you can apply the rule you just created to the mail already in your inbox, or you can choose to have it apply only to the incoming mail from now on. Choose either "run now" or "ok".

You're finished. That wasn't SO hard, was it? You might even want to set up some more rules to help you organize your inbox or to filter out spam or unwanted email.

-------------------

Some more information about attachments and viruses/worms:

Email viruses and worms almost always are transmitted through attachments. Remember after the Anthrax scare in the US a couple of years ago when everyone was very picky about what mail they accepted and opened? Anything that looked suspicious or didn't have a return address wasn't opened.

Think about attachments in the same way. If you get email from someone you don't know, don't open the attachment! If the email doesn't say anything personal to you or use your real name, don't open the attachment. You can always send an email back to that person asking them about who they are or what the attachment is for if you're in doubt.

What you need to know about the difference between spam and viruses:

Recently online I've seen a couple of people referring to the emails they get from viruses as spam. If you want to impress your friends and coworkers with your technological savvy, you need to know that spam is unwanted and unsolicited email sent for a commercial purpose: someone wants you to buy something, be a part of their program or visit their website.

Email you get because of viruses is technically not spam. Although it is unwanted, its intention is not to advertise or market anything; it's simply a nuisance created by someone with too much time on his or her hands!

Also keep in mind that viruses commonly get into people's address books and send out automatic emails to everyone on the list. Your friends and relatives are not sending you infected email on purpose.

Sunday, July 9, 2006

PHP in the Command Line

by: Robert Plank


There's a single line you can add to your web host's control panel that will automatically archive your content.

LISTEN CLOSELY AND YOU'LL HEAR THE OCEAN

Ever run commands in DOS? You've used a shell. A "shell" in the computer world is a place where you enter commands and run files by name rather than clicking around different windows.

Most web hosts let you operate a shell remotely. This means that you can type commands in a window on your computer that are actually run on your web host, thousands of miles away.

I'd like you to log in to your shell now. If you can't do it by going into DOS and typing "telnet your.domain.here", your web host probably uses "SSH" -- a secure shell. You'll have to ask your host how to log in to the shell; they might tell you to download a program called "PuTTY" and give you instructions on how to use it.

If you can't login to your shell, or aren't allowed, you'll just have to sit back and watch what I do.

Now that you're logged in, type: echo hi

On the next line will be printed hi

Try this: date +%Y

This prints the current year. That's 2004 for me.

So what if we combined the two? Try: echo date +%Y

Well, that doesn't work, because the computer thinks you're trying to echo the TEXT "date +%Y" instead of the actual COMMAND. What we have to do here is surround that text with what are called "back quotes". Unix will evaluate everything enclosed in back quotes (by evaluate, I mean it'll treat that text as if it were entered as a command).

Your back quotes key should be located in the upper-left corner of your keyboard, under the Esc key.

PIPE DOWN, OVER THERE...

Type this in: echo `date +%Y`

Gives us "2004". You could even do something like this: echo `dir`

Which puts the directory listing all on one line.

But now we put our newfound knowledge to good use. Unix has another neat feature called redirection (often loosely called piping), which means "take everything you would normally output to the screen, and shove it into whatever file I tell you to." So say I had something like this:

echo "hey" > test.txt

Now type "dir" and you'll see a new file, test.txt, that wasn't there before. View it off the web, or FTP it to your computer, do whatever you have to, to read the file. It should contain the word "hey".

Likewise, dir > test.txt would store the directory listing into "test.txt".

HERE TODAY, GONE TOMORROW

But say we wanted that text file to be named according to the current date. You already have the pieces to figure that out, if you think about it. Type: date --help to get a listing of all the possible ways to represent the date. The ones you want, for the year, month and day, are %Y, %m, and %d (capitalization *is* important here).

This is what you want: echo `date +%Y%m%d.html`

Running this today, January 8th, 2004, results in: 20040108.html

I've just echoed this year, followed by this month and this day, with an ".html" at the end. This will be our output file.

Now, to pipe it: echo "hey" > `date +%Y%m%d.html`

If this sort of thing were to run every day, it would save "hey" to a file called 20040108.html today, and tomorrow to a file called 20040109.html, then 20040110.html, and so on.

The easy part now is figuring out what you want archived. I use wget, which takes an option to set the output file, so we don't need to use piping. Here's an example of how to use wget to save the page "http://www.google.com" to a file named for today's date:

wget http://www.google.com --output-document=`date +%Y%m%d.html`

PUT IT TOGETHER

And now, to set up your crontab. I won't explain how crontabs work, just that they're the equivalent of the Windows Task Scheduler: they automatically run a particular command at a given date and time. The following will save http://www.google.com to a different filename every day.

0 0 * * * wget http://www.google.com --output-document=`date +\%Y\%m\%d.html` > /dev/null

(Note the backslashes: inside a crontab, a bare percent sign is treated as a line break, so each % has to be escaped as \%.)

Keep in mind that if you want to put it in a special directory, just put the path in, i.e. change what's in the "output document" parameter to: `date +/home/user/wwwroot/your.host/\%Y\%m\%d.html`

I've piped the output to /dev/null because wget saves the file for us, and there's no reason to do anything else with the output.

Tip: Pipe your cron jobs to /dev/null if you aren't doing anything with the output, because some hosts e-mail you the results and no one needs an extra piece of useless e-mail every day.

Just change http://www.google.com to the page of your choice. However it's important to know that the "archive" you're taking will only be a snapshot of that page on a particular day.

What I mean by that is, if you're archiving a blog page every day, this archiver won't give you everything that appeared on that page over a particular day; it'll just capture what was there at the moment it ran. So it's not useful for everything, but it's good if you have access to a page that changes constantly, about once a day, whose contents you'd like to store.

Add the line above to your crontab file. These days every host has a control panel, so there should be a place in there to add cron jobs. If you'd like the archiver to run at a time other than midnight, or if it should run weekly, monthly, or whatever, try this tool I've made for you:

http://www.robertplank.com/cron

I've designed it the same way Task Scheduler is set up: you can enter a certain time, run only on weekdays, or run only on certain days of the week. Anything you want.

This tip doesn't take care of everything... for example, wget won't save the images on a page unless they're referenced by full URLs. In the next installment of this article series I'll be showing you how you can use PHP to make up for some of the things wget can't do (like grabbing images).

Here's my solution: http://www.jumpx.com/tutorials/commandline/get.zip

It's not the most perfect script in the world, but it should do what you want most of the time. If you'd like to delve into how it works, I've added comments within: all the functions and a few of the important parts of the code are explained.

ARGUMENTS (NOT THE SHOUTING KIND)

But wait, you want to use it in a crontab, which is run from the command line. You can't just do something like:

php get.php?url=http://www.google.com

Because it'll try looking for a *file* named all that, complete with the question mark and all. So what if you have ten different URLs to grab from ten different crontabs, but you only want one script?

How would you do all that? It's a long brutal ordeal so prepare yourself. Ready?

php get.php url=http://www.google.com

Yeah, that's all there is to it. PHP's pretty cool like that, it takes the arguments after the file name and stores them in the same array you'd check anyway.
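One caveat: depending on your PHP version and configuration, those key=value arguments may arrive only as raw strings in $argv rather than being parsed into an array for you. If that happens, a few lines at the top of the script will normalize things. This is a sketch of mine, not part of the original get.php:

<?php
// A sketch, not part of the original get.php: copy key=value
// command-line arguments into $_GET so the rest of the script can
// check the same array whether it was run from the web or the shell.
if (!empty($argv)) {
    foreach (array_slice($argv, 1) as $arg) {
        if (strpos($arg, '=') !== false) {
            list($key, $value) = explode('=', $arg, 2);
            $_GET[$key] = $value;
        }
    }
}
?>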

One thing you might notice is that every time you run PHP from the command line, it gives you something like this:

Content-type: text/html

X-Powered-By: PHP/4.3.3

your output here...

Those first couple of lines are the HTTP headers. But we're not using HTTP (we're not loading the script from a browser), so on the command line it's better to call php with the "-q" option, like this:

php -q get.php url=http://www.google.com

The "q" stands for quiet, and will refrain from giving you the HTTP headers. If you're just piping the script to /dev/null (to nothing) in a crontab, it doesn't really make a difference but you should try to make this a habit when running PHP from the command line.

That's enough for you to at least get started. If you still feel like poking around with the things PHP can do in the command line, you can try prompting a user for keyboard input, like this:
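
(The original snippet appears to have been lost in reposting; here is a minimal sketch of the idea - prompt for a line of keyboard input and echo it back:)

<?php
// Prompt for a line of keyboard input and echo it back.
// STDIN only exists when PHP is run from the command line.
echo "What's your name? ";
$name = trim(fgets(STDIN));
echo "Hello, $name!\n";
?>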

Remember, that only works when PHP is run from the shell.

If you have PHP installed on a local Windows machine of yours, you can also see what happens when you try to read (and write) to filehandles like "COM1:" and "LPT1:" ... yep, you guessed it, the serial port and printer port. If PHP isn't installed on the computer you're using now, then don't bother. But it is possible to use PHP to print and interact with your peripherals as well.

You're welcome.

About The Author

Robert Plank is the creator of Lightning Track, Redirect Pro, Rotatorblaze, and others.

An easy way to display the content saved by this article's script is explained in chapters 15 and 16 of his book, "Simple PHP": http://www.simplephp.com

You may reprint this article in full in your newsletter or web site.

This article was posted on January 12, 2004


Saturday, July 8, 2006

The Top Seven Strategies for Website Success

Whether you're concerned with business-to-business or business-to-consumer, whether your organization is large or small, commercial or nonprofit, there are some fundamental questions about your Website and technology strategy that should be addressed.

Otherwise, you risk missing opportunities, and not maximizing the return on your investment in your online presence.

If you haven't visited your own Website for a while, look at it again in light of these questions:

1. Does your Website present an appropriate image of your company?

Marketers talk a lot about branding, and consistency of message. Does your company site reflect how you'd like your customers to feel about your business? Is it sophisticated, and professional looking? Does it speak directly to visitors in language that they'll understand, and in ways that relate to their issues and needs?

Image is also about public relations. Publicity is a powerful marketing tool, and reporters are increasingly looking for stories and information online. Does your Website offer a media center? Does it offer comment on current events in your industry? Do you face up to the bad news, and spin it to your advantage? Whatever you may think of Microsoft, check out their extensive Press Pass area at: http://www.microsoft.com/presspass/default.asp

2. Does your Website suggest potential for new or currently untapped markets?

For almost all the sites that I've consulted on, we've identified markets or audiences beyond the "real-world" customer base of the business.

This may be because the site extends the geographic reach of your marketing. If you have good content on your site, it may also be because visitors looking for your subject area find you in search engines, and come to read your articles and white papers.

Either way, if you find many "non-traditional" visitors to your site, you should assess whether they constitute a possible new market area for your business.

3. Does your Website suggest potential for new products or services?

A clear understanding of your visitor needs may also encourage you to consider new products or services. On the Web, bundling expertise into downloadable, for-sale content provides valuable new revenue streams for many businesses and non-profits.

You can find great clues for development ideas by tracking the keywords entered into your own site search engine. These show what visitors expect to find on your site - and therefore what they expect your company to offer.
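If your site search doesn't already log its queries, a couple of lines of server-side code will start collecting them. A minimal sketch in PHP, assuming your search form sends a "q" parameter and the web server may write to a log file:

<?php
// A sketch: append each search term to a log file, one per line.
// Assumes the search form sends a "q" parameter and that the web
// server is allowed to write to search.log.
$term = isset($_GET['q']) ? trim($_GET['q']) : '';
if ($term !== '') {
    file_put_contents('search.log', date('Y-m-d H:i:s') . "\t" . $term . "\n", FILE_APPEND);
}
// ...then hand the term on to the real search code.
?>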

4. Does your Website provide continuing added value for existing customers?

Most site owners focus on acquiring new customers, and fail to maximize the opportunities to support and service existing ones.

These include password-protected areas where your clients can follow the progress of their projects, share documents with you, etc. Personalization and pre-populated forms (i.e. which are automatically filled in with the customer's details) help to create a feeling of value, and save time for your visitors.

Check the average response time for a contact from your Website. One of the top complaints about major company sites is that e-mails are not answered in a timely (hopefully 24 hours or less) manner.

5. Does your Website support your internal operations and employee needs?

This question relates to whether you're making the best use of all available technologies, and integrating them with your online operations.

Example applications to consider include:

* Instant messaging, fast becoming a serious business tool
* Knowledge bases - continually updated databases that can provide automated customer support on a 24/7 basis
* Streaming media, perhaps for just-in-time training or on-the-spot manuals for your operatives
* Intranets and extranets, which are really just fancy names for password-protected employee and client areas

6. Does your Website integrate fully with your "real-world" activities and processes?

One of the most frustrating visitor experiences is to complete a form, an application, or to submit a search on your Website, only to receive an error message.

Customers want the security of an e-mailed purchase confirmation. They want to know that they'll be taken off your mailing list quickly and without the need for multiple requests.

With the complexity of technology and programs today, sometimes a change to a seemingly unrelated system can wreak havoc. Do you regularly check all the input forms and processes on your site to ensure that no unexpected gremlins have crept in?

7. Does your Website provide you with a justifiable return on investment?

This is probably the most important question of the seven, and possibly also the most difficult.

That's because the answer depends on a clear understanding of the goals of your site, both in direct financial terms, and in other less tangible benefits, such as name recognition.

The keys to evaluating ROI, to improving your site, and often to further business development ideas can be found in your traffic reports. These show what visitors are looking for, how long they spend on the site, where they go, where they leave, and what rate of response you get to the various calls to action.

These reports can be daunting - a mass of figures, graphs and URLs. But I'd strongly suggest that someone in your organization should understand them. Otherwise, you're shooting in the dark with your Web investment.

Friday, July 7, 2006

Computer Traumas

It has happened! Computer games have started to control my life on and off the screen. No complicated games like Age of Empires, just the simple one of Tetris. You know the one, where different shaped and colored bricks fall out of the sky and you have to arrange them in nice lines at the bottom? Hopefully with the end result of all colors matching in straight lines so that they can be removed and points gained.

Crazy really; it first happened many years ago when I had this stupid bet that I could get more points than the next guy. What that really means is: "I am going to be up all night playing this game and will be totally incapable of staying awake in the office tomorrow, unless of course I play the game in the office as well". That's what computer games do to us. We become machines where food and sleep are secondary to all else. Just keep on playing.............till you drop.

I managed to get through that episode with only a slight increase in my weight and a damaged back from not having moved anything except my two fingers for a sustained period of time. The latest episode, though, has created havoc with my life in more ways than one, and I am getting seriously worried about it.

I had been playing the game in the evening for around three hours and had then gone to bed early for a dreamless and normal sleep. All okay and expected, you say? Well, the sleep was, but when I drove to the office the next day, things started to happen that rapidly woke me up to the danger that I was in. There I was in my blue car approaching the traffic lights when all of a sudden I swerved into the other lane, thus ending up stopped neatly behind another blue car. Behind me, confused and irritated drivers in green and red cars tooted their horns angrily, wondering what this maniac was doing. But I? I was happy in that I had managed to get the colors arranged, and all I needed was another blue car and then we could have a full line................oh no, what is happening to me? I sat there for a while shivering as it dawned on me that I had entered the game itself; it had taken me over.........I was a brick!

Yeah, and that was not all. I found myself one afternoon staring inanely at a house wall and following the line of bricks along, trying to sort out in my mind which pattern was best and which was not. And at my desk I found that I had arranged all files and papers in a neat pattern according to color and size, totally disregarding any order based on incoming, outgoing, urgency, etc. Extremely worrying, to say the least!

I have withdrawn from playing Tetris and other games of that sort, in the hope that I will stop having these off-the-screen episodes and can return to a normal existence without off-the-screen battles. Do other people suffer from this or is it just me?

The other game that I played, to have a break from Tetris, was "Prairie Dog". One of those annoying games where you have a choice of guns and dogs keep appearing on the screen; aim and fire is the next step. Bang, bang, bang, another dog bites the dust. Yes, I know, pathetic really, but great fun. Volume up full, there I would be, furiously firing at any movement, reloading and starting again, and the dogs would make a strangled sound as I hit them. But once again I one day realized that all was not well with me, as I had started to sit on my balcony and take imaginary potshots at cars as they appeared on the road. Or in a busy street I would say "bang, bang" and pretend that I had cleared a path for myself through the crowds.

I played that other game, Age of Empires, many times too. Love that game, as it takes skill and thought as well as two fingers and rapid movement, and I became extremely proficient at it as time went by. My computer often struggled to cope with the size of my army and the enemies that I faced. I would sit there for hours on end, maneuvering, shifting, attacking and withdrawing till the sun started to come up on the horizon. It would be then that I would force myself away and climb into bed, only to resurface two hours later, make a large urn of coffee and re-attack with a vengeance.

Although this game never caused me to start charging at other cars on the highways or lobbing screwed-up notes at others in the office, it did cause me to take a good look at myself. What would happen if suddenly I started to do this sort of thing in real life? If I started to make deals with my neighbors to attack next-door offices or ping elastic bands at the mail delivery boy?

I've stopped playing games now and have become a serious and boring "been there, done that and cured myself" type of person. I do have long and empty hours where I feel the urge to take up where I left off, and I get extremely jealous when others talk about games or I see others playing them, but I resist. I think it must be like smoking, where one never loses the urge to light up and take a draw – just the one! No, no, I cannot! I now sit there and lecture others on the dangers of playing games and tell them that they should stop before it is too late. And they? They just nod politely and then disappear to talk amongst themselves............."must be an ex-player", whisper, whisper, whisper.