What time is it? – About NTP

In this blog post we’ll talk about time, how it works, why it’s important to computers, and how NTP can be used to manage the time on computer systems.

“You may delay, but time will not.”
― Benjamin Franklin

Intro to NTP

NTP (Network Time Protocol) allows computers on a network to synchronise their system clocks, often to within a few milliseconds. It does this by exchanging packets over the network and performing calculations based on the contents of these packets. Here is a simplified breakdown of this process, where two systems, A (client) and B (server), synchronise their time:

  1. System A inserts its current time into a packet, and sends this to system B
  2. Upon receiving the packet from system A, system B inserts its current time into the packet and sends it back to system A
  3. When system A receives this packet, it uses the contents of the packet to estimate the time difference between the systems
  4. System A uses this time difference to adjust its system time, so that it is in sync with system B

This is often repeated multiple times, with every iteration resulting in the times of the systems getting closer and closer together. The above steps assume that system B’s time is correct.
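The estimate in step 3 comes down to two small formulas, which can be sketched as follows (the timestamps here are made up for illustration):

```python
# Sketch of the client's estimate in step 3, with made-up millisecond
# timestamps. t1: A sends the request, t2: B receives it, t3: B sends the
# reply, t4: A receives the reply (t1/t4 on A's clock, t2/t3 on B's).
t1, t2, t3, t4 = 0, 507, 512, 21   # here B's clock is ~500ms ahead of A's

offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock difference
delay = (t4 - t1) - (t3 - t2)          # round-trip time spent on the network

print(offset)  # 499.0 -> A should step its clock forward by ~499ms
print(delay)   # 16    -> 16ms of network delay, cancelled out of the estimate
```

Because the delay is measured too, network latency largely cancels out of the offset estimate; repeating the exchange and favouring the samples with the lowest delay is roughly how real NTP clients refine it.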

Why is NTP needed?

You may be wondering why computers go through so much “trouble” to synchronise their clocks, or why there is an entire protocol dedicated to it; people’s watches are often out by a couple of minutes, and they manage just fine, why are computers any different? Well that’s just it, computers are very different. The smallest unit of time most people use every day is seconds. They microwave their lunch for 45 seconds, they wait 40 seconds for their tea to brew, they call someone back in 30 seconds. That’s fine, and it doesn’t really matter if you cook your lunch for a few seconds longer, as not much happens in a second.

However, a second for a computer is a huge amount of time. A moderately powerful modern computer can perform 14,000,000,000 (that’s 14 billion) operations every second. So a one-second disparity between the times on systems could result in an extra 14 billion operations taking place. To be completely fair, each of these many operations represents very little in the grand scheme of things, but the example still holds: computers are extremely time-sensitive.

What happens when system times are out of sync?

When the times on systems are out of sync, some really odd and horrible things can happen:

  • Logs on different systems won’t correspond to each other. Let’s say your application breaks, and you want to look through the logs and see what the problem was. You know the issue happened at, for example, 03:17:54. When you check the logs, you look for the entries at 3AM and try to figure out what went wrong. What if one of the systems’ time is wrong? Even though a log entry says something happened at 3AM, you could actually be looking at what happened at 4AM. This can be manageable if you’re only comparing logs between two systems, but when you start dealing with more and more systems, this becomes impossible pretty quickly.
  • Tracing emails can become very difficult. When you send an email, chances are it passes through multiple servers before reaching its destination. If you want to debug an email issue, you’re probably going to be checking headers (additional bits of data transmitted with the message), which contain timestamps added by each server the email passes through. If the times are out of sync on the systems, it can make it really difficult to trace the path of a message.
  • CRON jobs could run at the wrong time. Let’s say you have a CRON job that runs at 9PM. Its purpose is to, for example, kick all users off of the system and backup their files from that day. But, what if the system time is wrong, and this CRON ends up running at 2PM instead? People would be kicked off of the system in the middle of the working day, potentially resulting in a big loss of data and/or productivity.
  • Authentication services can be affected. Lots of two-factor authentication systems work on the idea of a one-time password. This is usually calculated based on a shared secret key and the current timestamp. In order to authenticate successfully, a user needs to input the one-time password that the server is expecting. What happens if the client and server’s system times are different? Authentication fails, because the server is expecting a different one-time password than the one provided by the user.
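To make the last point concrete, here is a minimal TOTP (time-based one-time password) sketch following RFC 6238, using the test secret published in that RFC. The code depends on which 30-second window the clock falls in, so a client whose clock is skewed by more than a window produces a code the server rejects:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """One-time password for the 30-second window containing `timestamp`."""
    counter = struct.pack(">Q", timestamp // step)  # which time window we're in
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 6238 test secret

totp(secret, 59)   # '287082' -> what the server expects at second 59
totp(secret, 45)   # '287082' -> same 30-second window, same code
totp(secret, 661)  # a skewed clock lands in a different window: wrong code
```

Servers typically accept a window or two either side to tolerate small amounts of skew, but a clock that NTP isn’t keeping in sync will drift past that allowance.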

How to set up NTP on a Linux system & why servers should use UTC regardless of time zone

Setting up a Linux system to synchronise its system clock is really easy, and we’d always recommend doing this, especially on servers.

First, make sure you have an NTP client installed on your system. Most systems will come with one installed by default. A lot of the servers we manage run Ubuntu, so I’ll show you how to check whether an NTP client is installed on an Ubuntu system. Run this command in a terminal:

dpkg --list | grep ntp

You should see the package ntp listed. If nothing is returned, then you don’t have an NTP client installed. You can install it by running the following command with root privileges:

sudo apt-get install ntp

The most important thing to configure with NTP is which servers to synchronise time with. These are defined in the /etc/ntp.conf file. If you check this file, you’ll find lines looking like the following:

server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org

These lines specify which NTP servers should be used. We typically change these to use the “standard” UK time-servers, like so:

server 0.uk.pool.ntp.org
server 1.uk.pool.ntp.org
server 2.uk.pool.ntp.org
server 3.uk.pool.ntp.org

These servers are all members of the NTP pool project. The NTP pool is a collection of publicly accessible NTP servers, which anyone can join (though there are a few requirements). We at Dogsbody Technology Ltd actually have a number of our own servers in the pool, providing time synchronisation capabilities for everybody to use. This helps to keep everybody’s system times in sync.

If you’re having issues with NTP, or anything else for that matter, on your servers, then please get in touch and we’ll be happy to help.

Feature image made by Sean MacEntee licensed CC BY 2.0

HSTS Header

HTTPS Everywhere

“HTTPS Everywhere” is an increasingly popular trend among websites which gives added security, speed and SEO benefits. In August 2014, Google announced that it would be adjusting its search engine ranking algorithm to benefit HTTPS-only sites; this was one of the key announcements that started the trend of sites going HTTPS everywhere. There have also been numerous leaks and blog posts talking about the NSA & GCHQ intercepting communications to and from insecure HTTP sites.

In the past, one of the reasons websites weren’t HTTPS everywhere was the added latency from the overhead of the HTTPS connection. With slow internet connections and slower servers by today’s standards, this caused sites to become sluggish, which obviously isn’t great from a user experience point of view. Now that bandwidth and server performance have improved, the overhead is negligible. There have also been improvements such as SPDY and HTTP/2, which can drastically improve a website’s performance over HTTPS; we will be covering how these work in future blog posts.

There are a few steps you can take to get your website running HTTPS everywhere:

  • Redirect all HTTP requests to HTTPS; this can be done in your Apache or nginx configuration and will tell web browsers that any request they make for content over HTTP should be redirected to the HTTPS equivalent URL. Ideally you would use a 301 (permanent) redirect for this. Redirecting HTTP requests to HTTPS is something we do for the Dogsbody Technology site.
  • Add the HSTS (HTTP Strict Transport Security) header to your website; again, this is done in your Apache or nginx configuration. This header tells browsers that they should only access the website over HTTPS, and the browser will make sure not to request HTTP pages until the “max-age” time is reached (how long the browser should cache the HSTS setting for). There is also an option, “includeSubdomains”, which tells the browser that any subdomain of the site should also be served over HTTPS; be careful when setting this if you have any subdomains that won’t work over HTTPS. We don’t include subdomains in our HSTS settings as we have a few subdomains out of our control that can’t be served over HTTPS.
  • The last thing you should do, and only if you have the “includeSubdomains” setting mentioned above, is add your website to the HSTS preload list. The HSTS preload list is a list of domains, shipped with browsers, that will be served over HTTPS by default without having to perform an initial HTTP request to the website. For this to work you will also need an additional “preload” option specified in your web server’s HSTS configuration. You can submit your site to the HSTS preload list here.
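The first two steps above can be sketched in an nginx configuration like this (server names and certificate paths are placeholders; Apache has equivalent Redirect and Header directives):

```nginx
# Redirect all HTTP requests to HTTPS with a 301 (permanent) redirect
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Serve HTTPS and send the HSTS header (max-age is one year, in seconds)
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    add_header Strict-Transport-Security "max-age=31536000" always;
    # add "; includeSubDomains" (and "; preload" for the preload list)
    # only once you're sure every subdomain works over HTTPS
}
```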

Another good option is the HTTPS Everywhere browser plugin from the EFF. It works to achieve the same result as using HSTS preload, acting as a list of rules browsers should follow for websites. It allows finer-grained control than HSTS and is perfect for domains like ours where we can’t include every subdomain; you can write your own ruleset for the plugin and submit a git pull request to get your website into their next release. You can see our pull request where we added the rules for dogsbodytechnology.com & dogsbodyhosting.net and some specific subdomains.

Once you’ve done all of the above steps you can be pretty happy that your site is HTTPS everywhere, and the majority of traffic to your website will be served over HTTPS (some older browsers don’t support the HSTS header).

If you think going HTTPS everywhere is the next step for you be sure to get in contact with us and we can help you achieve that!

The Dark Art of Email Deliverability

We often get questions over the technologies which help get your legitimate email delivered.  There are three main ones:

  • SPF – Used to lock down email sources to your servers
  • DKIM – To authenticate your sent emails
  • DMARC – For real world email reporting and DKIM enforcement

Email is vital to any organisation. Ensuring your emails are delivered can seem like a dark art. There are many systems at play to cut down on spam and other fraudulent emails. Over 80% of the email we receive at Dogsbody Technology is filtered out as SPAM. It can be hard for a system to recognise your legitimate email.

This makes email one of the more difficult technologies to get right.

“The Dark Arts are many, varied, ever-changing, and eternal”.
– Professor Severus Snape

 

Foreword

Our example features three characters: Alice, Bob and Sybil.

  • Alice is sending emails to Bob.
  • Bob receives emails through Google’s Gmail.
  • Sybil is sending emails to Bob pretending to be Alice.

SPF, DKIM and DMARC are checked by the receiving email system and will affect how it handles the email. Larger email providers (Gmail, Hotmail, Yahoo, etc.) are more likely to check these technologies as they suffer from more SPAM than smaller hosts.


Sender Policy Framework; SPF

SPF is a DNS record which identifies all email sources permitted for your domain.

When an email is received the SPF record is looked up by the receiving mail server and the source IP from the email is then cross-referenced to check that it is in the SPF record.

Example SPF record for Alice’s server, example.com:

example.com. IN TXT "v=spf1 ip4:198.51.100.13 ~all"

e.g. when Alice sends an email from her domain (example.com) to Bob, Gmail receives this email and gets the source IP (198.51.100.13). Gmail then looks up the SPF record for ‘example.com’, which lists ‘198.51.100.13’ as an authorised server. Gmail knows Alice has allowed this and passes the email on to Bob.

When Sybil emails Bob as ‘example.com’ (Alice’s domain), Gmail receives the SPAM email and gets the email’s source IP (203.0.113.37). When Gmail looks up the SPF record for ‘example.com’ it doesn’t list ‘203.0.113.37’, so Gmail will flag the email as SPAM.

This means that all sources sending email for your domain need to be listed. This may include Mandrill, Google Apps for business, Socket labs, AWS SES, FreeAgent, etc.
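The receiving server’s check can be sketched as follows (a deliberately crude parser that only understands ip4: terms; real SPF also handles include:, a, mx, CIDR ranges and the ~all/-all qualifiers):

```python
def spf_permits(record: str, source_ip: str) -> bool:
    """Crude SPF check: is this IPv4 address listed in an ip4: term?
    (Real SPF also handles include:, a, mx, CIDR ranges and qualifiers.)"""
    allowed = {term[len("ip4:"):] for term in record.split()
               if term.startswith("ip4:")}
    return source_ip in allowed

record = "v=spf1 ip4:198.51.100.13 ~all"  # Alice's published record

spf_permits(record, "198.51.100.13")  # True  -> Alice's server, deliver
spf_permits(record, "203.0.113.37")   # False -> Sybil's server, flag as SPAM
```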

DomainKey Identified Mail; DKIM

DKIM is a signature added to the header of an email before it is sent into the public internet. This signature shows that you are taking ownership of your email and allows any recipients to prove that.

The DKIM signature is generated with a private key and is based on the email’s contents; this makes it unique to you and your email. For the receiving server to authenticate the signature, a public key related to the private key is published in a DNS record. With the public key, the receiving server can confirm that the DKIM signature was generated by your key. If the public key doesn’t match the signature, the email will be considered SPAM.

Example DKIM record for Alice’s server, example.com:

selector._domainkey.example.com IN TXT "v=DKIM1; p=MIGfMA0GCSq..."

For example, when Alice sends an email to Bob, her server will generate and add a DKIM header to it. When Gmail receives this email it sees the DKIM header and looks up the DKIM public key record. It will check the DKIM signature against the public key. Since the email was sent through Alice’s servers, the signature matches and Gmail will pass the email on to Bob.

When Sybil emails Bob as Alice, Sybil’s server can generate a DKIM header, but when Gmail receives it and checks the signature against example.com’s (Alice’s) DKIM public key it will not match, and Gmail will flag the email as spam. This also means that if Sybil intercepts a signed email before it reaches Gmail and changes its contents, the signature will no longer match, telling Gmail that the email has been tampered with.
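Real DKIM signs with a public-key algorithm (typically RSA), so receivers can verify without ever knowing the private key. As a standard-library stand-in for the sign-and-verify flow, this sketch uses an HMAC with a single hypothetical key; the tamper-detection idea is the same:

```python
import hashlib
import hmac

# Hypothetical key: real DKIM uses an RSA key pair, with the public half
# published in DNS; HMAC here just stands in for the sign/verify flow.
KEY = b"alice-signing-key"

def dkim_sign(body: bytes, key: bytes) -> str:
    """Signature over the message contents, added as a header before sending."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def dkim_verify(body: bytes, signature: str, key: bytes) -> bool:
    """Receiver recomputes the signature and compares."""
    return hmac.compare_digest(dkim_sign(body, key), signature)

sig = dkim_sign(b"Hi Bob", KEY)
dkim_verify(b"Hi Bob", sig, KEY)              # True: message untouched
dkim_verify(b"Hi Bob, love Sybil", sig, KEY)  # False: contents were changed
```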

Note that DKIM is only checked reactively, when the header is seen in an email. This is unlike SPF records which Gmail will check pro-actively.

 

Domain-based Message Authentication, Reporting & Conformance; DMARC

DMARC is a DNS record that states which anti-spam technologies the sending domain is using, and tells receivers to report back what they see.

When an email is received, the receiving server looks up the DMARC record for the sender. This record includes flags detailing whether SPF and/or DKIM are in use. The receiving server is then able to use this extra information to ensure that the received email is authenticated as the sender intended. The DMARC record also specifies how the receiver should act if an email comes in which doesn’t meet the SPF and DKIM information provided, e.g. whether the message should be let through, put in junk or rejected completely.

Additionally, each ISP that supports DMARC is encouraged to send a report to the contact specified in the DMARC record. The report shows totals of email received and how the ISP dealt with these emails. The ISP can send a report at any time after it starts receiving emails; this is often daily.

Example DMARC record for Alice’s server, example.com:

_dmarc.example.com IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com;"

e.g. When Alice sends an email to Bob, Gmail will receive her email and look up the example.com DMARC record. Gmail will then check that the email it received uses DKIM and SPF to authenticate as set in the DMARC record. It does and Gmail will pass it onto Bob.

If Sybil sends an email as example.com to Bob, Gmail will receive it and look up the DMARC record for example.com. It then checks that SPF and DKIM were used to authenticate the email, and this will fail as neither SPF nor DKIM will match the DMARC record. Gmail may then follow the DMARC policy ‘quarantine’ and put Sybil’s email into Bob’s junk mail folder. Gmail can then report to Alice that it received an email from the server ‘203.0.113.37’ (example.net) spoofing example.com, and that it failed SPF and DKIM checks.
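The record itself is just semicolon-separated tag=value pairs, so a receiver’s first step can be sketched like this (a minimal parser, stdlib only):

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com;")
policy["p"]    # 'quarantine': failing mail goes to the junk folder
policy["rua"]  # 'mailto:dmarc@example.com': where aggregate reports are sent
```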

DMARC reports will come from each receiving server, and a spammer can easily send thousands and thousands of emails in an instant, so these reports can quickly build up. Dmarcian is a tool which will break these reports down into usable graphs and summaries, helping you understand them.


These reports show all sources of emails sent from the domain and if DKIM and/or SPF was used to authenticate them, providing real world information on how your domain’s emails are being received out on the internet. It shows if there are any unknown sources impersonating the domain. All of this helps to understand the domain’s email reputation and how deliverable emails are.

DMARC gives you information. Information is power.

Conclusion

Globally, SPAM has been going down, and it will continue to do so only if more email systems implement these technologies. This has been just one example of how SPF, DKIM and DMARC can protect a domain; there are many more implementations that have their own specific requirements. If you want to improve your deliverability, or want more information on how your email looks to others, these technologies will give you that advantage. Don’t let Sybil spoof you.

Let us build your perfect email infrastructure.

 

Icons used in images made by Freepik from www.flaticon.com are licensed under CC BY 3.0.
Feature image made by Christian Barmala licensed CC BY 2.0.

BBC micro:bit Launched

Yesterday the new BBC micro:bit 2016 was launched. Over a million were given to Year 7 students in the UK for them to take home and keep – just in time for the Easter break! The tiny device can be plugged into a computer and programmed to do all sorts of cool stuff, and is designed to teach children about basic coding, which is something that is applied way more than you realise in the real world (a bit like Linux); coffee machines, cars… it’s not just computers you know.

We first came across the new micro:bit at Over the Air 2015, when we instantly wanted one to play around with! The micro:bit is not commercially available as yet… the BBC are considering selling it in the future, which may help it become as popular as the Raspberry Pi.

Having hired apprentices we know that computer studies in schools haven’t really cut it for real world applications – our former apprentices were mainly self-taught at home. Some of us at Dogsbody Technology are old enough to remember the first BBC Micro in the 1980s and how it changed our lives. Hopefully the BBC’s attempt at reviving that success will help the next generation of computer geeks.

Dogsbody Technology looks forward to working with and hopefully employing the future generation.

Adding an external hard drive to a Raspberry Pi

We believe the native place for a server is in a datacentre and as such don’t run any infrastructure from our office.  This caused us a small problem when testing new setups, as we would sometimes need to build a new development server (locally in a virtual machine) and have to pull down all the ISOs and packages needed.  We have a fast internet connection but we are also impatient.

We needed a way to mirror the latest releases of the popular Linux distributions that we use so they are easily accessible for when we need them in the office. We decided that the best way to do this was to set up a script mirroring these popular repositories on a Raspberry Pi that we use for various things in the office.

We had a spare 1TB external hard drive that seemed like the perfect fit, and as it’s always best to start projects with a clean slate we did a full shred of the disk to make sure it was completely clean. This was done with the following command:

sudo shred -n 1 -z /dev/sda &

This performs a pass on the disk filling it with random bytes, and then a second pass that writes zeros to the disk.

The next step was to create some partitions on the disk: one we were going to use as swap space for the Raspberry Pi, and the other for storage of all the ISOs we were going to mirror. We used fdisk to partition the disk and created two primary partitions. The first partition was 1GB of swap (it’s usually best to make your swap partition first, as it will be created on the outer tracks of the disk, where data passes under the head fastest). The second partition was for our storage and used the rest of our available disk space.

After partitioning the disk we needed to format the partitions accordingly. Our storage partition was /dev/sda2, so we formatted it as ext4 with:

sudo mkfs.ext4 /dev/sda2 -L storage

And then mounted it as /mnt/storage with the following commands:

sudo mkdir /mnt/storage
sudo mount /dev/sda2 /mnt/storage

We now had around 1TB of storage under /mnt/storage that we could use to store all of our ISOs for easy access over the office network meaning we always have the latest and greatest versions at our disposal.

Next, to set up our Raspberry Pi swap partition and enable swapping on it, we did the following:

sudo mkswap /dev/sda1
sudo swapon /dev/sda1

We also needed to disable the standard Raspberry Pi swap that uses the SD card, with the following commands:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo chkconfig dphys-swapfile off

The command:

sudo swapon -s

will now show our new 1GB swap partition on /dev/sda1.

Finally, we needed to add some lines to the fstab file so the partitions are automounted at boot. We edited /etc/fstab and added the following lines:

/dev/sda2 /mnt/storage ext4 defaults 0 2
/dev/sda1 none swap sw 0 0

Save that file and you should be all done! You can reboot to make sure that all of the partitions are automounted and that your Raspberry Pi is now swapping on the new 1GB swap partition we made.

Radio Interview: Rural Broadband Speeds

As more and more people work from home, both full-time and just for overtime, home broadband speeds become more and more important.

We were invited to join a discussion on the BBC Surrey breakfast show to talk about broadband speeds in Surrey…

 

Storing data online (do you really want me to say “in the cloud”?) means that we need to be connected to the Internet to access it and amazingly there is still a large part of the population that doesn’t have that sort of access.

What’s your opinion?  Do you think this is a big concern?

 

Servers have reputations too

Many have compared the Internet to the wild west.  While there may well be cowboys, it is certainly true that your server is judged by the company that it keeps.  It seems that people spend a lot of time looking after their own online reputation and very little looking after the reputation of their infrastructure.

A standard part of our SysAdmin service is to set up reputation alerts for your domain names and IP addresses.  Last week one of the servers we manage for a client was added to a realtime blacklist (RBL).  Blacklists are used to identify places online that are unsafe; this could mean spammers, malware or porn.  We were convinced that our customer had none of these issues, so we contacted the blacklist to find that they had actually blacklisted an entire subnet which included our server.  They had noticed a large amount of spam coming from a number of machines around ours and had, quite correctly, wanted to blacklist them.  Our server was “collateral damage”.

Thankfully the blacklist in question removed our server very quickly but it does show that just like real life, all the hard work that goes into keeping your house safe and secure can be tarnished quite easily by having bad neighbours.

Things to remember when setting up infrastructure:
  • Your domain name has a reputation as well as your IP address.  You should monitor both.
  • A good free place to monitor IP addresses is Project Honey Pot.
  • A good free place to monitor web domain issues (http only) is Google Webmaster Tools.
  • If you are running a mailserver you should monitor realtime blocklists too.
  • The ISP that hosts your equipment is responsible for the neighbours you keep.
  • Server reputation can also include response time.  Ensuring a low latency connection is essential.

Reputation monitoring is an important part of our SysAdmin service.  We monitor your equipment so you don’t have to.

Feature image – “Storage Servers” by grover_net is licensed under CC BY ND 2.0

What’s in a name?

Cloud computing is often regarded as a horrible buzzword that is thrown around at every opportunity.  This may be true but it may also be better and easier than some of the alternatives. In this article we look at the differences between the three main types of cloud computing and why there is so much confusion.

SaaS – Software as a Service

Chances are you have been using SaaS for ages without even knowing about it. Webmail anyone? SaaS allows you to use a program or software as a free (Gmail) or paid-for (Salesforce.com) subscription service. Customers rely on the vendor to maintain and update the product for them, saving the time and energy required to set up and run these services themselves in house. Google is really running with this concept, with calendaring, word processing and even mapping all possible from any web browser. Having software run externally also allows for very easy roaming, as any user can access their data from anywhere in the world.

PaaS – Platform as a Service

The PaaS layer offers savings for both the customer and the developer, but at the cost of functionality and control. Examples of PaaS are Google’s App Engine and the Force.com platform.  The PaaS supplier provides a standard programming environment, usually with APIs that make it easy to utilise certain off-the-shelf tools such as redundant storage and databases.  Developers can quickly create tools and products that can be sold with all the advantages of SaaS services, without having to get their hands dirty building fully secure and redundant systems from scratch.

IaaS – Infrastructure as a Service

Purists say that IaaS is the only one that deserves to be called Cloud Computing. For years companies have purchased or rented servers in data centres to run their applications. While it was great to have your own box, it was up to you to make it robust and redundant enough to cope with everything the Internet throws at you. IaaS providers such as Rackspace and ElasticHosts virtualise their data centres and sell virtual servers with the same power as the physical server you had, but with the added benefit of redundancy and a large cost saving. Because virtual machines can be turned on and off at will, and most providers bill by the hour or minute, it is very easy to cope with peaks in demand. Instead of using one server to process some data over 20 hours you can use 20 servers and have your answer in one hour.

There are always exceptions

Of course no labelling would be complete without some blurring of the lines. Amazon have successfully managed to confuse things with their very popular AWS products. While their S3 service is a PaaS product, their EC2 service is sold and commonly referred to as an IaaS product.  However, there are a number of proprietary tools and calls that you must use, which many argue makes it a PaaS product too.

Whatever your views (and there are many), Dogsbody Technology can help you understand what is right for you and your business.  If you have any questions regarding this post or suggestions for articles on more subjects then please do comment below or drop us a line.

Mobile websites

Just as half the internet is still trying to get online it seems that the other half has decided that computers are old and mobile is the future. As always, it’s a compromise.  Most companies do still need a “base” website but mobile is definitely growing and by having a dedicated mobile presence you can keep customers from forgetting about you when they aren’t tied to a computer.

What is a mobile device?

You may think this is a strange question but statistics vary hugely depending on the demographic of visitors to your site.  From a truly worldwide perspective Nokia still has a very large market share.  Continents such as Africa absorb most of their web browsing in a mobile form where Nokia has a monopoly and few people have desktop computers.

It’s a very good idea to look at your website statistics to find out the type of visitors that your site is getting.  Technology sites such as this one are mainly visited with newer smartphones which can deal with more complex webpages and which makes things a lot easier.

Finally, let’s not forget the new range of mobile devices, the tablets!  Is an iPad a mobile device?  Either way, if you are designing for mobile, you should design for tablets too.

Ways to go mobile

There are many ways to go mobile.  If you have the budget you can design a customised site although most sites are built via a CMS such as WordPress or Joomla.  For WordPress we have found that the WPtouch plugin provides a very nice mobile version of your site if most of your visitors are using newer smartphones and the WordPress Mobile Pack plugin provides a mobile version of your site that works with the most variety of phones.  There are similar plugins for Joomla and other CMS tools which we can help you with if you have any questions.

Dogsbody Hosting also provides a mobile webpage service that allows you to create a mobile website easily, without any coding experience needed.  Create a quick site to publish your company’s opening hours, or offer discount vouchers to people reading your posters in the street.  The possibilities are endless.

If you have made a mobile site using the tools we have told you about here then please let us know in the comments.

It doesn’t always have to be .com

In some ways the internet is very crowded (100 million active websites) and in other ways very quiet (most web traffic is caused by few sites) but it’s important to remember that the internet is pretty much limitless and there is more than one way to claim your acre of land online.

When first creating a website it’s easy to instantly think that you need a .com web address or that you are just in the UK so can go with a cheaper .uk address.  There is nothing wrong with that but with .com being the busiest of all the top level domains (TLDs) and .uk now being the 2nd largest country code top-level domain (ccTLD) it’s quite hard to get the name you want.

Two is better than one

It’s always a good idea to own more than one domain.  You may only need one domain to run your site, but you have to ask yourself how you or your company would feel if you owned the .com address and then someone else went and bought the .co.uk.  Remember that they can use the domain for whatever they like, which may be something you don’t want associated with your name and brand.

Having multiple domains can act as a backup too; if you can receive email on more than one address then, should something happen, it can allow you to carry on running, at least while things are sorted out.  And before you ask what can happen to domain names that you own… they can be stolen or hijacked, taken offline by authorities, or even forgotten about and not renewed… although we would never do that!

Another advantage of owning several domains is that they can be utilised for different areas of your business.  .tv domains are a great example of this, allowing you to point one at video content.  Instead of hiding tutorials or a video blog deep on your website or on YouTube, why not point your customers straight there.

Speciality domains

Speciality domains are domain extensions with a specific meaning that is well known to Internet users.  These domain extensions provide a clue to visitors that tells them the type of content to expect or what form that content will take.

.me – Not many think about Montenegro when they see a .me domain.  The use is obvious, .me means it’s about me.  Visitors to the site will expect content about the person who owns the domain.  Recommended for blogs, a CV or resume, photo sharing, anything personal.

.tv – “If you have a play button on your website you should have a .tv”.  Visitors to a .tv site know what to expect, video content.  Recommended for tutorials, family video sharing, screencasts, live streams & YouTube channels.

.mobi – Specifically designed to host mobile content.  Most CMS driven sites such as WordPress and Joomla can seamlessly offer a mobile formatted page and it’s a good way to advertise this to mobile visitors.

.co – A great alternative to the .com TLD.  Also has additional meanings such as co-op.

Other domains

You don’t always need a speciality domain, there are the standard ones too.

.com – The original and still has its place.  Recognized globally as the biggest.

.net – You can’t get more tech than a .net domain. Perfect for Internet companies or groups that work in the online world.  Many companies also have a .net version of their domain which is used by the IT group to name and manage machines used by the company; this separates the IT “tinkering” from the marketing “sales”.

.org – Originally intended for organisations, .org has now lost that moniker and is deemed by some as just for non-profits.  Some companies like to have a commercial presence on .com and their organisational details on .org.

.info – Of the seven new TLDs introduced in 2000 .info has become the most successful.  “Info” is a recognized term in over 30 languages which makes this TLD a truly global domain.

.biz – A popular alternative for companies whose business name is already taken as a .com.

.eu – For a UK company branching out from our small island it can get expensive to start registering and promoting domains in every country.  The .eu allows for a pan European look.

.uk – Not to be belittled.  You may be a sports club or local group; even if you do leave the UK you still want to promote yourself as a UK team or company.  What could be better than having it in your name?

Whatever you choose you can see that there are plenty of options.  Perhaps Dogsbody Technology or Dogsbody Hosting can help you with your decision.

Let us know what you think too.  There are many many more TLD and ccTLDs, which ones do you like?