
SOCIAL MEDIA

Social media

From Wikipedia, the free encyclopedia
Social media are media for social interaction, using highly accessible and scalable publishing techniques. Social media uses web-based technologies to turn communication into interactive dialogues. Andreas Kaplan and Michael Haenlein also define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, which allow the creation and exchange of user-generated content."[1] Businesses also refer to social media as consumer-generated media (CGM). A common thread running through all definitions of social media is a blending of technology and social interaction for the co-creation of value.

Distinction from industrial media

People gain information, education, news, etc., by electronic media and print media. Social media are distinct from industrial or traditional media, such as newspapers, television, and film. They are relatively inexpensive and accessible to enable anyone (even private individuals) to publish or access information, compared to industrial media, which generally require significant resources to publish information.
One characteristic shared by both social media and industrial media is the capability to reach small or large audiences; for example, either a blog post or a television show may reach zero people or millions of people. The properties that help describe the differences between social media and industrial media depend on the study. Some of these properties are:
  1. Reach - both industrial and social media technologies provide scale and enable anyone to reach a global audience.
  2. Accessibility - the means of production for industrial media are typically owned privately or by government; social media tools are generally available to anyone at little or no cost.
  3. Usability - industrial media production typically requires specialized skills and training. Most social media production does not, or in some cases reinvents skills, so anyone can operate the means of production.
  4. Recency - the time lag between communications produced by industrial media can be long (days, weeks, or even months) compared to social media (which can be capable of virtually instantaneous responses; only the participants determine any delay in response). As industrial media adopt social media tools, this distinction may well fade over time.
  5. Permanence - industrial media, once created, cannot be altered (once a magazine article is printed and distributed changes cannot be made to that same article) whereas social media can be altered almost instantaneously by comments or editing.
Community media constitute an interesting hybrid of industrial and social media. Though community-owned, some community radios, TV and newspapers are run by professionals and some by amateurs. They use both social and industrial media frameworks.
In his 2006 book, The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yochai Benkler analyzed many of these distinctions and their implications in terms of both economics and political liberty. However, Benkler, like many academics, uses the neologism network economy or "network information economy" to describe the underlying economic, social, and technological characteristics of what has come to be known as "social media".
Andrew Keen criticizes social media in his book The Cult of the Amateur, writing, "Out of this anarchy, it suddenly became clear that what was governing the infinite monkeys now inputting away on the Internet was the law of digital Darwinism, the survival of the loudest and most opinionated. Under these rules, the only way to intellectually prevail is by infinite filibustering."[2]
Tim Berners-Lee contends that the danger of social networking sites is that most are silos and do not allow users to port data from one site to another. He also cautions against social networks that grow too big and become a monopoly, as this tends to limit innovation.[3]
There are various statistics that account for social media usage and effectiveness for individuals worldwide. Some of the most recent statistics are as follows:
  • Social networking now accounts for 22% of all time spent online in the US.[4]
  • A total of 234 million people age 13 and older in the U.S. used mobile devices in December 2009.[5]
  • Twitter processed more than one billion tweets in December 2009 and averages almost 40 million tweets per day.[6]
  • Over 25% of U.S. internet page views occurred at one of the top social networking sites in December 2009, up from 13.8% a year before.[7]
  • Australia has some of the highest social media usage statistics in the world. In terms of Facebook use Australia ranks highest with almost 9 hours per month from over 9 million users.[8][9]

Social media, Marketing, and "social authority"

One of the key components in successful social media marketing implementation is building "social authority". Social authority is developed when an individual or organization establishes themselves as an "expert" in their given field or area, thereby becoming an "influencer" in that field or area.[10]
It is through this process of "building social authority" that social media becomes effective. That is why one of the foundational concepts in social media has become that you cannot completely control your message through social media; rather, you can simply begin to participate in the "conversation" in the hope that you can become a relevant influence in that conversation.[11]
However, this conversation participation must be cleverly executed, because while people are resistant to marketing in general, they are even more resistant to direct or overt marketing through social media platforms. This may seem counter-intuitive, but it is the main reason building social authority with credibility is so important. A marketer can generally not expect people to be receptive to a marketing message in and of itself. In the 2008 Edelman Trust Barometer report, the majority (58%) of respondents reported that they most trusted company or product information coming from "people like me", inferred to be information from someone they trusted. In the 2010 Trust Report, the majority switched to 64% preferring their information from industry experts and academics. According to Inc. Technology's Brent Leary, "This loss of trust, and the accompanying turn towards experts and authorities, seems to be coinciding with the rise of social media and networks."[12][13]
Thus, using social media as a form of marketing has taken on whole new challenges. As the 2010 Trust Study indicates, marketing efforts through social media are most effective when they revolve around the genuine building of authority. Someone performing a "marketing" role within a company must honestly convince people of their genuine intentions, knowledge, and expertise in a specific area or industry by providing valuable and accurate information on an ongoing basis, without an overt marketing angle attached. If this can be done, trust with the recipient of that information – and in the message itself – begins to develop naturally. This person or organization becomes a thought leader and value provider, setting themselves up as a trusted "advisor" instead of a marketer. "Top of mind awareness" develops, and the consumer naturally begins to gravitate to the products and/or offerings of the authority/influencer.[14][15]
Of course, there are many ways authority can be created – and influence can be accomplished – including: participation in Wikipedia which actually verifies user-generated content and information more than most people may realize; providing valuable content through social networks on platforms such as Facebook and Twitter; article writing and distribution through sites such as Ezine Articles and Scribd; and providing fact-based answers on "social question and answer sites" such as EHow and Yahoo! Answers.
As a result of social media – and the direct or indirect influence of social media marketers – today, consumers are as likely – or more likely – to make buying decisions based on what they read and see in platforms we call "social" but only if presented by someone they have come to trust. That is why a purposeful and carefully designed social media strategy has become an integral part of any complete and directed marketing plan but must also be designed using newer "authority building" techniques.[16]


WEBSITE

Website

From Wikipedia, the free encyclopedia
A website (also spelled Web site[1][2]) is a collection of related web pages, images, videos or other digital assets that are addressed relative to a common Uniform Resource Locator (URL), often consisting of only the domain name (or, in rare cases, the IP address) and the root path ('/') in an Internet Protocol-based network. A web site is hosted on at least one web server, accessible via a network such as the Internet or a private local area network.
A web page is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). A web page may incorporate elements from other websites with suitable markup anchors.
Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user of the web page content. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
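As a rough illustration of that request/response cycle, here is a minimal sketch in Python using only the standard library (the URL is a placeholder):

    # Fetch a page over HTTP(S) and look at what the server returns.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as response:        # HTTPS adds encryption
        html = response.read().decode("utf-8")               # the HTML markup itself
        print(response.status, response.headers["Content-Type"])
        print(html[:200])                                    # a browser would render this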
All publicly accessible websites collectively constitute the World Wide Web.
The pages of a website can usually be accessed from a common root Uniform Resource Locator (URL) called the homepage. The URLs of the pages organize them into a hierarchy, although the hyperlinks between them convey the reader's perceived site structure and guide the reader's navigation of the site.
Some websites require a subscription to access some or all of their content. Examples of subscription websites include many business sites, parts of news websites, academic journal websites, gaming websites, message boards, web-based e-mail, social networking websites, websites providing real-time stock market data, and websites providing various other services (e.g. websites offering storing and/or sharing of images, files and so forth).

History

The World Wide Web (WWW) was created in 1989 by CERN physicist Tim Berners-Lee.[3] On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone.[4] Before the introduction of HTML and HTTP, other protocols such as the File Transfer Protocol (FTP) and the Gopher protocol were used to retrieve individual files from a server. These protocols offer a simple directory structure which the user navigates to choose files to download. Documents were most often presented as plain text files without formatting, or were encoded in word processor formats.

Overview

Organized by function, a website may be a personal website, a commercial website, a government website, or a non-profit organization website. It could be the work of an individual, a business or other organization, and is typically dedicated to some particular topic or purpose. Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, may sometimes be blurred.
Websites are written in, or dynamically converted to, HTML (Hyper Text Markup Language) and are accessed using a software interface classified as a user agent. Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs and cell phones.
A website is hosted on a computer system known as a web server, also called an HTTP server, and these terms can also refer to the software that runs on these systems and that retrieves and delivers the web pages in response to requests from the website users. Apache is the most commonly used web server software (according to Netcraft statistics) and Microsoft's Internet Information Server (IIS) is also commonly used.

Static website

A static website is one that has web pages stored on the server in the format that is sent to a client web browser. It is primarily coded in Hypertext Markup Language (HTML).
Simple forms or marketing examples of websites, such as a classic website, a five-page website, or a brochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services via text, photos, animations, audio/video, and interactive menus and navigation.
This type of website usually displays the same information to all visitors. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, it is a manual process to edit the text, photos and other content and may require basic website design skills and software.
In summary, visitors are not able to control what information they receive via a static website, and must instead settle for whatever content the website owner has decided to offer at that time.
Static websites are edited using four broad categories of software:
  • Text editors, such as Notepad or TextEdit, where content and HTML markup are manipulated directly within the editor program
  • WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver (previously Macromedia Dreamweaver), with which the site is edited using a GUI interface and the final HTML markup is generated automatically by the editor software
  • WYSIWYG online editors, which create media-rich online presentations such as web pages, widgets, intros, blogs, and other documents.
  • Template-based editors, such as Rapidweaver and iWeb, which allow users to quickly create and upload web pages to a web server without detailed HTML knowledge, as they pick a suitable template from a palette and add pictures and text to it in a desktop publishing fashion without direct manipulation of HTML code.

Dynamic website

A dynamic website is one that changes or customizes itself frequently and automatically, based on certain criteria.
Dynamic websites can have two types of dynamic activity: Code and Content. Dynamic code is invisible or behind the scenes and dynamic content is visible or fully displayed.

Dynamic code

The first type is a web page with dynamic code. The code is constructed dynamically, on the fly, using an active programming language instead of plain, static HTML.
A website with dynamic code refers to its construction, or how it is built, and more specifically to the code used to create a single web page. A dynamic web page is generated on the fly by piecing together certain blocks of code, procedures, or routines. A dynamically generated web page calls various bits of information from a database and puts them together in a pre-defined format to present the reader with a coherent page. It interacts with users in a variety of ways, including by reading cookies to recognize users' previous history, session variables, server-side variables, etc., or by direct interaction (form elements, mouseovers, etc.). A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user.
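As a toy illustration of this on-the-fly construction, the Python sketch below pieces a page together from database rows; the database file, table, and column names are invented for the example:

    # Assemble an HTML page from blocks of content stored in a database.
    import sqlite3

    def render_page(user_name: str) -> str:
        conn = sqlite3.connect("site.db")                    # hypothetical database
        rows = conn.execute(
            "SELECT title, body FROM articles ORDER BY published DESC LIMIT 5"
        ).fetchall()
        conn.close()
        # Piece the blocks together in a pre-defined format.
        items = "\n".join(f"<h2>{title}</h2><p>{body}</p>" for title, body in rows)
        return f"<html><body><p>Hello, {user_name}!</p>{items}</body></html>"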

Dynamic content

The second type is a website with dynamic content displayed in plain view. Variable content is displayed dynamically on the fly based on certain criteria, usually by retrieving content stored in a database.
A website with dynamic content refers to how its messages, text, images, and other information are displayed on the web page, and more specifically to how its content changes at any given moment. The web page content varies based on certain criteria, either pre-defined rules or variable user input. For example, a website with a database of news articles can use a pre-defined rule telling it to display all news articles for today's date. This type of dynamic website will automatically show the most current news articles on any given date. Another example of dynamic content is a retail website with a database of media products that allows a user to input a search request for the keyword Beatles. In response, the content of the web page will change from what it showed before and display a list of Beatles products such as CDs, DVDs, and books.

Purpose of dynamic websites

The main purpose of a dynamic website is automation. A dynamic website can operate more effectively, be built more efficiently and is easier to maintain, update and expand. It is much simpler to build a template and a database than to build hundreds or thousands of individual, static HTML web pages.

Software systems

There is a wide range of software systems, such as ANSI C servlets, Java Server Pages (JSP), the PHP and Perl programming languages, ASP.NET, Active Server Pages (ASP), YUMA and ColdFusion (CFML) that are available to generate dynamic web systems and dynamic sites. Sites may also include content that is retrieved from one or more databases or by using XML-based technologies such as RSS.
Static content may also be dynamically generated, either periodically or when certain conditions for regeneration occur, and then cached, in order to avoid the performance loss of invoking the dynamic engine on a per-user or per-connection basis.
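A minimal sketch of that caching idea, with an assumed cache path and lifetime: regenerate the static file only when it is missing or stale, and serve the cached copy otherwise.

    import os, time

    CACHE_FILE = "cache/index.html"    # hypothetical cache location
    MAX_AGE = 300                      # regenerate after 5 minutes

    def get_page(generate) -> str:
        fresh = (os.path.exists(CACHE_FILE)
                 and time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE)
        if not fresh:
            html = generate()          # the expensive dynamic step
            os.makedirs(os.path.dirname(CACHE_FILE), exist_ok=True)
            with open(CACHE_FILE, "w") as f:
                f.write(html)
        with open(CACHE_FILE) as f:
            return f.read()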
Plug-ins are available to expand the features and abilities of web browsers, which use them to show active content such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave, or applets written in Java. Dynamic HTML also provides for user interactivity and realtime element updating within web pages (i.e., pages don't have to be loaded or reloaded to effect any changes), mainly using the Document Object Model (DOM) and JavaScript, support for which is built into most modern web browsers.
Turning a website into an income source is a common practice for web developers and website owners. There are several methods for creating a website business which fall into two broad categories, as defined below.

Content-based sites

Some websites derive revenue by selling advertising space on the site (see Contextual advertising).

Product- or service-based sites

Some websites derive revenue by offering products or services for sale. In the case of e-commerce websites, the products or services may be purchased at the website itself, by entering credit card or other payment information into a payment form on the site. While most business websites serve as a shop window for existing brick and mortar businesses, it is increasingly the case that some websites are businesses in their own right; that is, the products they offer are only available for purchase on the web.
Websites occasionally derive income from a combination of these two practices. For example, a website such as an online auctions website may charge the users of its auction service to list an auction, but also display third-party advertisements on the site, from which it derives further income.


BLOG

Blogger Communities

A blogger community is a bond formed among bloggers on the basis of things they have in common, such as coming from the same region, attending the same campus, sharing the same hobby, and so on. Bloggers who join such communities usually hold activities together, such as offline meetups (kopi darat).
To join a blogger community, there are usually requirements or rules that must be met, for example coming from a particular region.

Types of blogs

  • Political blogs: about news, politics, activists, and all blog-based issues (such as campaigns).
  • Personal blogs: also called online diaries, containing someone's daily experiences, complaints, poems or verses, mischievous ideas, and conversations among friends.
  • Topical blogs: blogs that discuss a particular subject and stay focused on it.
  • Health blogs: more specifically about health; mostly containing patients' complaints, the latest health news, explanations about health matters, etc.
  • Literary blogs: better known as litblogs.
  • Travel blogs: focused on travel stories and accounts of trips.
  • Research blogs: academic matters such as the latest research news.
  • Legal blogs: legal issues and affairs; also known as blawgs (blog + law).
  • Media blogs: focused on dishonesty or inconsistency in the mass media; usually devoted to a particular newspaper or television network.
  • Religion blogs: discussing religion.
  • Education blogs: usually written by students or teachers.
  • Collective blogs: more specific topics written by a particular group.
  • Directory blogs: containing hundreds of links to website pages.
  • Business blogs: used by employees or entrepreneurs to promote their businesses.
  • Personification blogs: focused on non-human subjects, such as dogs.
  • Spam blogs: used to promote affiliate businesses; also known as splogs (spam blogs).

Popular culture

Ngeblog (the Indonesian term for blogging) has to be done almost constantly to maintain the blog owner's presence, and to show how well the blog is being maintained (changing the template or adding articles). There are now more than 10 million blogs on the Internet,[citation needed] and the number can still grow, because there are now many software packages, tools, and other Internet applications that make it easy for bloggers (as blog owners are called) to maintain their blogs. Besides maintaining and continually updating their blogs, newer bloggers also frequently go blogwalking: leaving links on other people's blogs or sites while posting comments. Some bloggers have even turned their blogs into their main source of income through the AdSense advertising program, paid posts, link selling, affiliate programs, and so on. This has given rise to the term professional blogger, or problogger: someone who makes a living solely from blogging,[citation needed] since there are in fact many channels of income, in both dollars and rupiah, from blogging.

Crime risks

Because blogs are often used to record their authors' daily activities, to reflect their authors' views on all sorts of topics, and to share information, blogs become a source of information for hackers, identity thieves, spies, and so on. Many confidential files and writings on sensitive issues have been found on blogs. This has resulted in people being fired from their jobs, having their access blocked, being fined, and even being arrested.


HOSTING

What is Web Hosting

What exactly is web hosting? In a nutshell, web hosting is like a folder or directory on your computer, except it's on a computer that's connected to the Web 24 hours a day, 7 days a week, and anyone on the web can read what you put in it. To use a web host, you put your files in your space on the web host. Visitors can find your files by going to your web address (called a URL). When a visitor makes a request to your URL, a web server "hears" that request, gets the files from the disk where your website lives, and shows them to the visitor. Whether you want to create a simple web page or build a massive web store like Amazon's, you need web hosting. You build your pages using a web language like HTML, build your scripts (programs) using a server language like Perl or C#, upload everything to your web host, tweak some values for your scripts (if you have scripts), and voilà... you're live. (That's a very, very simplified answer.)
Web hosting comes in different shapes and sizes. Which flavor is best for you depends on what you are trying to do; on what your "application" is. The major categories are: Shared Web Hosting, Virtual Private Hosting and Dedicated Hosting.
Within those categories you find subcategories such as e-commerce hosting and rich media hosting. At a basic level, the major category differences break down between cost and performance. Higher cost usually means higher performance, more tools and more resources. Higher performance often means increased maintenance on your part, though you can mitigate this by paying even more to have someone manage your hosting account.

Types and flavors of web hosting

On a cost basis, shared hosting is always the least expensive. Shared hosting is generally aimed at beginners and intermediate users (though if your specific application doesn't require CGI or database access, advanced users can save a lot of money with shared hosting too). As the name implies, shared hosting means "sharing" the hosting environment: usually your web site lives in a folder alongside many other people's web sites, and the same web server process serves up all of those sites on request. This means any site in a shared environment that hogs bandwidth will take CPU and disk access time away from its neighbors. Many shared hosting providers work to mitigate this by limiting the bandwidth, file sizes, and overall resource usage available to any one site. Upgrades in shared hosting generally consist of making additional bandwidth, file size, and CGI access available... basically allowing you to take up a higher priority among your neighbors. All in all, this is a very good balance of cost versus performance.
Continuing the cost analysis, virtual private hosting is the next major step up; it's the stopgap between shared and dedicated hosting. Virtual private hosting still shares a machine or disk, but the web server software, and indeed the entire operating system environment, is usually isolated for each site. So you might have a computer or disk with 10 sites on it, 10 different web servers for those sites, and 10 isolated operating environments. The advantages include better control of resource allocation and more enforceable distribution (i.e., neighbors who hog CPU and disk time in a shared environment have a more tightly controlled allocation in a virtual private server environment, so the number of cycles available to your processes is not diminished; here folks don't have to compete for each second, as the allotments are usually fixed). Another advantage is that you usually have robust CGI and database accessibility, and if you have a CGI script that accidentally runs an infinite loop, it won't suck up your neighbors' CPU allocations, since the operating system environment is isolated. Consider the same scenario in a shared hosting environment, where your CGI script's infinite loop might lock up the system and prevent any other site in that shared environment from being served... very bad!
At the top of the cost pyramid is dedicated hosting. This usually requires considerable technical skills at your disposal. Dedicated hosting basically means you have the whole machine or disk to yourself. It also can mean that when your web server falls down, you will have to restart it. Worse, it can mean that if your site gets hit by a DDoS attack, you might have to manage most if not all of the strategy to mitigate the attack. While shared hosting providers don't tend to highlight this facet, when one of their sites experiences a DoS attack, because it impacts the rest of the sites in their environment, they are highly motivated to mitigate the attack, and likely have highly skilled administrators available to do so. This is often a hidden advantage of hosting in a shared (or even a virtual private) environment. However, if your site is a frequent target of DoS attacks, your relationship with your hosting provider may be strained to the point of your being booted, or of your being charged specifically to help offset the special costs associated with managing your site and its impact on the rest of the shared users.


DOMAIN

Domain name syntax

A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com.
  • The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com.
  • The hierarchy of domains descends from the right to the left label in the name; each label to the left specifies a subdivision, or subdomain, of the domain to the right. For example: the label example specifies a subdomain of the com domain, and www is a subdomain of example.com. This tree of labels may consist of up to 127 levels. Each label may contain up to 63 ASCII characters. The full domain name may not exceed a total length of 253 characters.[2] In practice, some domain registries may have shorter limits.
  • A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames.
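A small Python sketch that checks just the length rules listed above (it ignores the other restrictions, such as which characters a label may contain):

    def is_valid_domain(name: str) -> bool:
        if len(name) > 253:                     # full name limit
            return False
        labels = name.rstrip(".").split(".")
        if len(labels) > 127:                   # depth limit of the label tree
            return False
        return all(0 < len(label) <= 63 and label.isascii() for label in labels)

    print(is_valid_domain("www.example.com"))   # True
    print(is_valid_domain("a" * 64 + ".com"))   # False: label longer than 63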

Top-level domains

The top-level domains (TLDs) are the highest level of domain names of the Internet. They form the DNS root zone of the hierarchical Domain Name System. Every domain name ends in a top-level or first-level domain label.
When the Domain Name System was created in the 1980s, the domain name space was divided into two main groups of domains.[3] The country code top-level domains (ccTLD) were primarily based on the two-character territory codes of ISO-3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLD) was implemented which represented a set of categories of names and multi-organizations.[4] These were the domains GOV, EDU, COM, MIL, ORG, NET, and INT.
During the growth of the Internet, it became desirable to create additional generic top-level domains. As of October 2009, there are 21 generic top-level domains and 250 two-letter country-code top-level domains.[5] In addition, the ARPA domain serves technical purposes in the infrastructure of the Domain Name System.
During the 32nd International Public ICANN Meeting in Paris in 2008,[6] ICANN started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains." This program envisions the availability of many new or already proposed domains, as well as a new application and implementation process.[7] Observers believed that the new rules could result in hundreds of new top-level domains being registered.[8]
An annotated list of top-level domains in the root zone database is published at the IANA website at http://www.iana.org/domains/root/db/ and a Wikipedia list exists.

Second-level and lower level domains

Below the top-level domains in the domain name hierarchy are the second-level domain (SLD) names. These are the names directly to the left of .com, .net, and the other top-level domains. As an example, in the domain en.wikipedia.org, wikipedia is the second-level domain.
Next are third-level domains, which are written immediately to the left of a second-level domain. There can be fourth- and fifth-level domains, and so on, with virtually no limitation. An example of an operational domain name with four levels of domain labels is www.sos.state.oh.us. The www preceding the domains is the host name of the World Wide Web server. Each label is separated by a full stop (dot). 'sos' is said to be a sub-domain of 'state.oh.us', and 'state' a sub-domain of 'oh.us', etc. In general, subdomains are domains subordinate to their parent domain. An example of very deep levels of subdomain ordering is the IPv6 reverse resolution DNS zones, e.g., 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS resolution domain name for the IP address of a loopback interface, or the localhost name.
Second-level (or lower-level, depending on the established parent hierarchy) domain names are often created based on the name of a company (e.g., bbc.co.uk), product or service (e.g., gmail.com). Below these levels, the next domain name component has been used to designate a particular host server. Therefore, ftp.wikipedia.org might be an FTP server, www.wikipedia.org would be a World Wide Web server, and mail.wikipedia.org could be an email server, each intended to perform only the implied function. Modern technology allows multiple physical servers with either different (cf. load balancing) or even identical addresses (cf. anycast) to serve a single hostname or domain name, or multiple domain names to be served by a single computer. The latter is very popular in Web hosting service centers, where service providers host the websites of many organizations on just a few servers.
The hierarchical DNS labels or components of domain names are separated in a fully qualified name by the full stop (dot, .).

Internationalized domain names

The character set allowed in the Domain Name System initially prevented the representation of names and words of many languages in their native scripts or alphabets. ICANN approved the Punycode-based Internationalized domain name (IDNA) system, which maps Unicode strings into the valid DNS character set. For example, københavn.eu is mapped to xn--kbenhavn-54a.eu. Some registries have adopted IDNA.
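Python's built-in idna codec implements this mapping, so the københavn.eu example can be reproduced directly:

    # Round-trip a Unicode domain name through its Punycode (ASCII) form.
    name = "københavn.eu"
    print(name.encode("idna"))                     # b'xn--kbenhavn-54a.eu'
    print(b"xn--kbenhavn-54a.eu".decode("idna"))   # 'københavn.eu'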

History

On 15 March 1985, the first commercial Internet domain name (.com) was registered under the name Symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.
By 1992, fewer than 15,000 .com domains had been registered.
In December 2009 there were 192 million domain names.[9] A large fraction of them are in the .com TLD, which as of March 15, 2010 had 84 million domain names, including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance-related sites, and 1.8 million sports sites.[10]

Domain name registration

The right to use a domain name is delegated by domain name registrars which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the whois protocol.
Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders".
ICANN publishes the complete list of TLD registries and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS service. For most of the 250 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information.
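The whois protocol itself is simple enough to sketch in a few lines of Python: open a TCP connection to port 43 of a registry's whois server and send the domain name followed by CRLF. VeriSign's public whois host for .com and .net is used here:

    import socket

    def whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))   # the whole query
            chunks = []
            while data := sock.recv(4096):                    # read until EOF
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois("example.com")[:400])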
Some domain name registries, often called network information centers (NIC), also function as registrars to end-users. The major generic top-level domain registries, such as for the COM, NET, ORG, INFO domains and others, use a registry-registrar model consisting of hundreds of domain name registrars (see lists at ICANN or VeriSign). In this method of management, the registry only manages the domain name database and the relationship with the registrars. The registrants (users of a domain name) are customers of the registrar, in some cases through additional layers of resellers.
In the process of registering a domain name and maintaining authority over the new name space created, registrars use several key pieces of information connected with a domain:
  • Administrative contact. A registrant usually designates an administrative contact to manage the domain name. The administrative contact usually has the highest level of control over a domain. Management functions delegated to the administrative contact may include management of all business information, such as name of record, postal address, and contact information of the official registrant of the domain, and the obligation to conform to the requirements of the domain registry in order to retain the right to use a domain name. Furthermore, the administrative contact installs additional contact information for technical and billing functions.
  • Technical contact. The technical contact manages the name servers of a domain name. The functions of a technical contact include assuring conformance of the configurations of the domain name with the requirements of the domain registry, maintaining the domain zone records, and providing continuous functionality of the name servers (that leads to the accessibility of the domain name).
  • Billing contact. The party responsible for receiving billing invoices from the domain name registrar and paying applicable fees.
  • Name servers. Most registrars provide two or more name servers as part of the registration service. However, a registrant may specify its own authoritative name servers to host a domain's resource records. The registrar's policies govern the number of servers and the type of server information required. Some providers require a hostname and the corresponding IP address or just the hostname, which must be resolvable either in the new domain, or exist elsewhere. Based on traditional requirements (RFC 1034), typically a minimum of two servers is required.
Domain names are often seen in analogy to real estate in that (1) domain names are foundations on which a website (like a house or commercial building) can be built and (2) the highest "quality" domain names, like sought-after real estate, tend to carry significant value, usually due to their online brand-building potential, use in advertising, search engine optimization, and many other criteria.
A few companies have offered low-cost, below-cost or even cost-free domain registrations, with a variety of models adopted to recoup the costs to the provider. These usually require that domains be hosted on their website within a framework or portal that includes advertising wrapped around the domain holder's content, revenue from which allows the provider to recoup the costs. Domain registrations were free of charge when the DNS was new. A domain holder can give away or sell an unlimited number of subdomains under their domain name. For example, the owner of example.org could provide subdomains such as foo.example.org and foo.bar.example.org to interested parties.
Because of the popularity of the Internet, many desirable domain names are already assigned and users must search for other acceptable names, using Web-based search features, or WHOIS and dig operating system tools. Many registrars have implemented Domain name suggestion tools which search domain name databases and suggest available alternative domain names related to keywords provided by the user.

Resale of domain names

The business of resale of registered domain names is known as the domain aftermarket. Various factors influence the perceived value or market value of a domain name.
According to Guinness World Records and MSNBC, the most expensive domain name sales on record include:[11]
  • Business.com resold for $350 million in July 2007 [12]
  • Business.com for $7.5 million in December 1999
  • AsSeenOnTv.com for $5.1 million in January 2000
  • Altavista.com for $3.3 million in August 1998
  • Wine.com for $2.9 million in September 1999
  • CreditCards.com for $2.75 million in July 2004
  • Autos.com for $2.2 million in December 1999
  • Sex.com for $13 million[13]
  • Toys.com: Toys 'R' Us by auction for $5.1 million in 2009[14]

Domain name confusion

Intercapping is often used to emphasize the meaning of a domain name. However, DNS names are case-insensitive, and some names may be misinterpreted in certain uses of capitalization, creating slurls. For example: Who Represents, a database of artists and agents, chose whorepresents.com, which can be misread as whore presents. Similarly, a therapists' network is named therapistfinder.com. In such situations, the proper meaning may be clarified by use of hyphens in the domain name. For instance, Experts Exchange, a programmers' discussion site, for a long time used expertsexchange.com, but ultimately changed the name to experts-exchange.com.
Intellectual property entrepreneur Leo Stoller threatened to sue the owners of StealThisEmail.com on the basis that, when read as stealthisemail.com, it infringed on claimed (but invalid) trademark rights to the word "stealth".[15]

Use in web site hosting

A domain name is a component of a Uniform Resource Locator (URL) used to access web sites, for example:
URL: http://www.example.net/index.html
Top-level domain name: .net
Second-level domain name: example.net
Host name: www.example.net
A domain name may point to multiple IP addresses to provide server redundancy for the services delivered; this is used for large, popular web sites. More commonly, however, one server at a given IP address may host multiple web sites in different domains. Such address overloading enables virtual web hosting, commonly used by large web hosting services to conserve IP address space. It is made possible by a feature of the HTTP version 1.1 protocol (absent from HTTP 1.0) that requires each request to identify the domain name being referenced: the Host header.
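A small sketch of why this works: in HTTP/1.1 the request itself names the site it wants via the Host header, so one IP address can serve many domains. Python's http.client would send the header automatically, but it is spelled out here for clarity, reusing the example.net placeholder from above:

    import http.client

    conn = http.client.HTTPConnection("www.example.net", 80)
    # The Host header tells the server which of its sites is being requested.
    conn.request("GET", "/index.html", headers={"Host": "www.example.net"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()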

Abuse and regulation

Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about SiteFinder,[16] numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure, not having obtained the customary consensus. SiteFinder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site. Unfortunately, other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of SiteFinder. While VeriSign later changed SiteFinder's behaviour with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward.
Despite widespread criticism, VeriSign only reluctantly removed it after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.[17]
There is also significant disquiet regarding the United States' political influence over ICANN. This was a significant issue in the attempt to create a .xxx top-level domain and sparked greater interest in alternative DNS roots that would be beyond the control of any single country.[18]
Additionally, there are numerous accusations of domain name front running, whereby registrars, when given whois queries, automatically register the domain name for themselves. Recently, Network Solutions has been accused of this.[19]

Truth in Domain Names Act

In the United States, the Truth in Domain Names Act of 2003, in combination with the PROTECT Act of 2003, forbids the use of a misleading domain name with the intention of attracting Internet users into visiting Internet pornography sites.
The Truth in Domain Names Act follows the more general Anticybersquatting Consumer Protection Act passed in 1999 aimed at preventing typosquatting and deceptive use of names and trademarks in domain names.

Fictitious domain name

A fictitious domain name is a domain name used in a work of fiction or popular culture to refer to a domain that does not actually exist.
Domain names used in works of fiction have often been registered in the DNS, either by their creators or by cybersquatters attempting to profit from it.[citation needed] This phenomenon prompted NBC to purchase the domain name Hornymanatee.com after talk-show host Conan O'Brien spoke the name while ad-libbing on his show. O'Brien subsequently created a website based on the concept and used it as a running gag on the show.[20]


KEYWORD STUFFING

Keyword stuffing

From Wikipedia, the free encyclopedia
Keyword stuffing is considered to be an unethical search engine optimization (SEO) technique. Keyword stuffing occurs when a web page is loaded with keywords in the meta tags or in content. The repetition of words in meta tags may explain why many search engines no longer use these tags.
Keyword stuffing had been used in the past to obtain maximum search engine ranking and visibility for particular phrases. This method is completely outdated and adds no value to rankings today. In particular, Google no longer gives good rankings to pages employing this technique.
Hiding text from the visitor is done in many different ways. Text colored to blend with the background, CSS "Z" positioning to place text "behind" an image — and therefore out of view of the visitor — and CSS absolute positioning to have the text positioned far from the page center are all common techniques. By 2005, many invisible text techniques were easily detected by major search engines.
"Noscript" tags are another way to place hidden content within a page. While they are a valid optimization method for displaying an alternative representation of scripted content, they may be abused, since search engines may index content that is invisible to most visitors.
Sometimes inserted text includes words that are frequently searched (such as "sex"), even if those terms bear little connection to the content of a page, in order to attract traffic to advert-driven pages.
In the past, keyword stuffing was considered to be either a white hat or a black hat tactic, depending on the context of the technique, and the opinion of the person judging it. While a great deal of keyword stuffing was employed to aid in spamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances was not intended to skew results in a deceptive manner. Whether the term carries a pejorative or neutral connotation is dependent on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a page of relevance that would have otherwise been de-emphasized due to the search engine's inability to interpret and understand related ideas. This is no longer the case. Search engines now employ themed, related keyword techniques to interpret the intent of the content on a page.
On the subject of keyword stuffing, the largest search engines state that they recommend keyword research and use (with respect to the quality content you have to offer the web) to help visitors find your valuable material. To avoid keyword stuffing, use keywords sensibly with SEO in mind: keywords should be reasonable and necessary, while still supporting proper placement and your targeted effort to achieve search results. Placing such words in the appropriate areas of HTML is perfectly allowed and reasonable. Google discusses keyword stuffing as "randomly repeated keywords".


ABOUT SEO


SEO Guide for Designers

According to a poll I conducted, just over 1 out of 10 people don't think SEO (Search Engine Optimization) is mandatory for a designer; and what really surprised me is that about 24% don't even know what SEO is! If you're among the quarter of people who don't know what SEO is or understand how it can help you, you should really read this article. This is an SEO guide for designers who want to learn about making it easier for websites or blogs to be found by search engines. I'll explain the common mistakes made by designers and developers. Then I'll provide some basic tips that you should be practicing to optimize your site for search engines.

Why Should You Learn About SEO?

  • SEO isn’t only for online marketers. As a web designer or frontend developer, most on-site SEO is your responsibility.
  • If your site is not search engine friendly, you might be losing a lot of traffic that you're not even aware of. Remember, besides visitors typing in "www.yourwebsite.com" and backlink referrals, search engines are the only way people can find your site.
  • There are many benefits to having a high-ranking site. Let's use ndesign-studio.com as an example. I have, on average, about 14,000 visitors a day. About 40 - 45% of that traffic comes from search engines (about 6,000+ referrals a day). Imagine: without search engine referrals, I would be losing thousands of visitors every day. That means I'm risking losing potential clients too.
  • SEO is also a value-added service. As a web designer/developer you can sell your SEO skills as an extended service.

The Basics: How Search Engines Work

First, let’s look at how crawler-based search engines work (both Google and Yahoo fall in this category). Each search engine has its own automated program called a "web spider" or "web crawler" that crawls the web. The main purpose of the spider is to crawl web pages, read and collect the content, and follow the links (both internal and external). The spider then deposits the information collected into the search engine’s database called the index.
When searchers enter a query in the search box of a search engine, the search engine’s job is to find the most relevant results to the query by matching the search query to the information in its index.
What makes or breaks a search engine is how well it answers your question when you perform a search. That's based on the search engine's algorithm, which is basically a set of factors the search engine uses to decide whether a page is relevant or not. The better your page scores on these factors (yes, some factors are more important than others), the higher it will be displayed in the search engine result pages.
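A toy spider in that spirit, sketched in Python: it fetches a page, follows the links it finds, and "indexes" (here, just prints) each URL. A real crawler would add politeness rules (robots.txt, rate limits) and an actual index; the start URL is a placeholder.

    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.parse import urljoin

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":                        # collect every hyperlink
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def crawl(start_url: str, max_pages: int = 10):
        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue                          # skip unreachable pages
            parser = LinkParser()
            parser.feed(html)                     # read and collect the content
            queue.extend(urljoin(url, link) for link in parser.links)
            print("indexed:", url)

    crawl("https://example.com/")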

Your Job As a Search Engine Optimizer

Each search engine has its own algorithm for ranking web pages. Understanding the general factors that influence the algorithm can improve your search result position, and this is what SEO experts are hired for. An SEO's job has two aspects: On-Site and Off-Site.
On-Site SEO: the things you can do on your own site, such as HTML markup, target keywords, internal linking, site structure, etc.
Off-Site SEO: the things you have much less control over, such as how many backlinks you get and how people link to your site.
This is a guide for designers and developers. The main concern is the On-Site aspects. Secretly though, if you do your job right… and design a beautiful site… and/or produce useful content… you’ll get Off-Site backlinks and social bookmarks without even lifting a finger.

Top 9 SEO Mistakes Made by Designers and Developers

1. Splash Page

I’ve seen this mistake many times where people put up just a big banner image and a link "Click here to enter" on their homepage. The worst case — the "enter" link is embedded in the Flash object, which makes it impossible for the spiders to follow the link.
This is fine if you don’t care about what a search engine knows about your site; otherwise, you’re making a BIG mistake. Your homepage is probably your website’s highest ranking page and gets crawled frequently by web spiders. Your internal pages will not appear in the search engine index without the proper linking structure to internal pages for the spider to follow.
Your homepage should include (at minimum) target keywords and links to important pages.

2. Non-spiderable Flash Menus

Many designers make this mistake by using Flash menus such as those fade-in and animated menus. They might look cool to you but they can’t be seen by the search engines; and thus the links in the Flash menu will not be followed.

3. Image and Flash Content

Web spiders are like text-based browsers: they can't read text embedded in graphic images or Flash. Many designers make this mistake by embedding important content (such as target keywords) in Flash or images.

4. Overuse of Ajax

A lot of developers try to impress their visitors by implementing massive Ajax features (particularly for navigation purposes), but did you know that this is a big SEO mistake? Because Ajax content is loaded dynamically, it is not spiderable or indexable by search engines.
Another disadvantage of Ajax: since the URL in the address bar doesn't change, your visitors cannot send a link to the current page to their friends.

5. Versioning of Theme Design

For some reason, some designers love to version their theme design into sub-level folders (i.e., domain.com/v2, v3, v4) and redirect to the new folder. Constantly changing the main root location may cause you to lose backlink counts and ranking.

6. “Click Here” Link Anchor Text

You probably see this a lot, where people use "Click here" or "Learn more" as the linking text. This is great if you want to rank high for "click here". But if you want to tell the search engine that your page is important for a topic, then use that topic/keyword in your link anchor text. It's much more descriptive (and relevant) to say "learn more about {keyword topic}".
Warning: Don’t use the EXACT same anchor text everywhere on your website. This can sometimes be seen as search engine spam too.

7. Common Title Tag Mistakes

Same or similar title text:
Every page on your site should have a unique <title> tag with the target keywords in it. Many developers make the mistake of having the same or similar title tags throughout the entire site. That's like telling the search engine that EVERY page on your site refers to the same topic and no page is any more unique than another.
One good example of bad title tag use is the default WordPress theme. In case you didn't know, the title tag of the default WordPress theme isn't that useful: Site Name > Blog Archive > Post Title. Why isn't this search engine friendly? Because every single blog post will have the same text "Site Name > Blog Archive >" at the beginning of the title tag. If you really want to include the site name in the title tag, it should be at the end: Post Title | Site Name.
Exceeding the 65-character limit:
Many bloggers write very long post titles. So what? In search engine result pages, your title tag is used as the link heading. You have about 65 characters (including spaces) to get your message across or risk having it cut off.
Keyword stuffing the title:
Another common mistake people tend to make is overfilling the title tag with keywords. Saying the same thing 3 times doesn’t make you more relevant. Keyword stuffing in the Title Tag is looked at as search engine spam (not good). But it might be smart to repeat the same word in different ways:
    "Photo Tips & Photography Techniques for Great Pictures"
“Photo” and “Photography” are the same word repeated twice but in different ways because your audience might use either one when performing a search query.
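To make this concrete, here is a rough audit script that flags duplicate titles and titles over 65 characters across a set of pages. The page list is a placeholder, and the regex-based extraction is only good enough for a sketch:

    import re
    from collections import Counter
    from urllib.request import urlopen

    pages = ["https://example.com/", "https://example.com/about"]   # placeholder URLs
    titles = {}
    for url in pages:
        html = urlopen(url).read().decode("utf-8", "replace")
        match = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
        titles[url] = match.group(1).strip() if match else ""

    dupes = {t for t, n in Counter(titles.values()).items() if n > 1}
    for url, title in titles.items():
        if title in dupes:
            print("duplicate title:", url, repr(title))
        if len(title) > 65:
            print("may be cut off in results:", url)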

8. Empty Image Alt Attribute

You should always describe your image in the alt attribute. The alt attribute is what describes your image to a blind web user. Guess what? Search engines can’t see images so your alt attribute is a factor in illustrating what your page is relevant for.
Hint: Properly describing your images can help your ranking in the image search results. For example, Google image search brings me hundreds of referrals everyday for the search terms "abstract" and "dj".
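A quick sketch of an alt-attribute check using Python's standard HTML parser; the image names are made up for the demo:

    from html.parser import HTMLParser

    class AltChecker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "img":
                alt = dict(attrs).get("alt", "")
                if not alt.strip():               # missing or empty alt text
                    print("missing alt:", dict(attrs).get("src", "?"))

    AltChecker().feed('<img src="dj.jpg"><img src="ok.jpg" alt="abstract DJ art">')
    # prints: missing alt: dj.jpg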

9. Unfriendly URLs

Most blog and CMS platforms have a friendly URL feature built in; however, not every blogger takes advantage of it. Friendly URLs are good for both your human audience and the search engines, and the URL is also an important spot where your keywords should appear.
Example of Friendly URL: domain.com/page-title
Example of Dynamic URL: domain.com/?p=12356
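Most platforms build those friendly URLs with a "slug" function along these lines (a simplified sketch):

    import re

    def slugify(title: str) -> str:
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())   # non-alphanumerics to dashes
        return slug.strip("-")

    print(slugify("SEO Guide for Designers!"))   # seo-guide-for-designers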

General SEO Do’s and Don’ts

Let me tell you WHAT TO DO by telling you WHAT NOT TO DO:

Don’t Ignore Your Audience

Write about topics your audience cares about. Like what? Find out by conducting a poll (like I did), scanning some relevant bulletin boards or forums, looking for common topics in customer emails, or doing some keyword research. There are great free keyword tools like the Google Keyword Tool or SEO Book's Keyword Tool, and loads more. The plan is not to spend your life doing keyword research but just to get a general idea of what your visitors are interested in.

Don’t Be Dense About Keyword Density

Once you have a topic for readers, help search engines find it. Keyword density is the number of times a keyword appears on a page compared to the total number of words. You want to make sure your keywords are included in the crucial areas:
  • the Title Tag
  • the Page URL (friendly URL)
  • the Main Heading (H1 or H2)
  • the first paragraph of content.
  • at least 3 times in the body content (more or less depending on amount of content and if and only if it makes sense).
Most people aim for a keyword density of 2% (i.e., use the keyword 2 times for every 100 words). But what if your keyword phrase is "SEO for Web Designers and Web Developers"? How many times can you repeat that before it sounds just plain unnatural? Write for your readers, not for search engines. If you follow the tips in this article, you'll be writing naturally for your readers, which works for the search engines too.
Warning: Do not over fill your page with the same keywords or you might be penalized by search engines for keyword stuffing.
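A small calculator matching the density definition above; the sample sentence is invented:

    import re

    def keyword_density(text: str, keyword: str) -> float:
        words = re.findall(r"[a-z0-9']+", text.lower())
        hits = sum(1 for w in words if w == keyword.lower())
        return 100.0 * hits / max(len(words), 1)   # percent of all words

    body = "Photo tips and photography techniques for great photo results."
    print(round(keyword_density(body, "photo"), 1))   # 22.2 (2 of 9 words)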

Don’t Ignore Relatives

In this article, it makes sense to mention topics like "keyword research", "search engine crawlers", and "title tag use", but what if I mentioned a highly trafficked term like "cell phone plans"? Kind of out of context, right? So use other keywords and topics that make sense to your audience; the search engine measures keyword relations to determine relevancy too.
  • Cars and Tires (yes)
  • Web Design and Flying Monkeys (no…well sometimes)

Don’t Be Afraid of Internal Linking

Do you want the search engine to see every page on your website? Help the search engine spider do its job. There should be a page (like a sitemap or blog archives) that links to all the pages on your site.
Tip: You can promote the more important pages by inserting text links within body content. Make sure you use relevant linking text and avoid using "click here" (as mentioned earlier).

Don’t Ignore Broken Links

You should always search for and fix broken links on your site. If you've removed a page or section, you can use robots.txt to prevent spiders from crawling and indexing the broken links. If you have moved a page or your entire website, you can use a 301 redirect (e.g., via .htaccess on Apache) to point to the new URL.
Tip: You can use Google Webmaster Tools to find broken links and 404 Not Found errors.
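A rough sketch of such a broken-link check in Python; the link list is illustrative, and Google Webmaster Tools does this at scale for your own site:

    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    links = ["https://example.com/", "https://example.com/no-such-page"]
    for url in links:
        try:
            status = urlopen(url, timeout=10).status
        except HTTPError as err:
            status = err.code                     # e.g. 404 Not Found
        except URLError:
            status = None                         # DNS failure, refused, etc.
        if status != 200:
            print("broken:", url, status)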

Don’t Be Inconsistent With Your Domain URL

To search engines, a www and a non-www URL are considered two different URLs. You should always keep your domain and URL structure consistent. If you start promoting your site without the "www", stick with it.
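One common fix is a permanent (301) redirect from the bare domain to the www form, sketched here as a tiny WSGI app with a placeholder domain:

    def canonical_host(environ, start_response):
        host = environ.get("HTTP_HOST", "")
        if host == "example.com":                 # bare domain: redirect to www
            url = "http://www.example.com" + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", url)])
            return [b""]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"served from the canonical host"]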

Don’t Be Scared of Semantic Coding

Semantic and standards-based coding not only makes your site cleaner, it also allows the search engines to read your page better.
