Replacement Server Monitoring – Part 3: Kapacitor alerts and going live!

So far in this series of blog posts we’ve discussed picking a replacement monitoring solution and getting it up and running. This instalment will cover setting up the actual alerting rules for our customers’ servers, and going live with the new solution.

Kapacitor Alerts

As mentioned in previous posts, the portion of the TICK stack responsible for the actual alerting is Kapacitor. Put simply, Kapacitor takes metrics stored in the InfluxDB database, processes and transforms them, and then sends alerts based on configured thresholds. It can deal with both batches and streams of data, and the difference is fairly clear from the names: batch processing takes multiple data points as an input and looks at them as a whole, while a stream accepts a single point at a time, folding each new point into the mix and re-evaluating thresholds each time.

As we wanted to monitor servers constantly over large time periods, stream data was the obvious choice for our alerts.

We went through many iterations of our alerting scripts, known as TICK scripts, before mostly settling on what we have now. I’ll explain one of our “Critical” CPU alert scripts to show how things work (comments inline):

var critLevel = 80 // The CPU percentage we want to alert on
var critTime = 15 // How long the CPU percentage must be at the critLevel (in this case, 80) percentage before we alert
var critResetTime = 15 // How long the CPU percentage must be back below the critLevel (again, 80) before we reset the alert

stream // Tell Kapacitor that this alert is using stream data
    |from()
        .measurement('cpu') // Tell Kapacitor to look at the CPU data
    |where(lambda: ("host" == '$reported_hostname') AND ("cpu" == 'cpu-total')) // Only look at the data for a particular server (more on this below)
    |groupBy('host')
    |eval(lambda: 100.0 - "usage_idle") // Calculate percentage of CPU used...
      .as('cpu_used') // ... and save this value in its own variable
    |stateDuration(lambda: "cpu_used" >= critLevel) // Keep track of how long CPU percentage has been above the alerting threshold
        .unit(1m) // Minutely resolution is enough for us, so we use minutes for our units
        .as('crit_duration') // Store the number calculated above for later use
    |stateDuration(lambda: "cpu_used" < critLevel) // The same as the above 3 lines, but for resetting the alert status
        .unit(1m)
        .as('crit_reset_duration')
    |alert() // Create an alert... 
        .id('CPU - {{ index .Tags "host" }}') // The alert title 
        .message('{{.Level}} - CPU Usage > ' + string(critLevel) + ' on {{ index .Tags "host" }}') // The information contained in the alert
        .details('''
        {{ .ID }}
        {{ .Message }}
        ''')
        .crit(lambda: "crit_duration" >= critTime) // Generate a critical alert when CPU percentage has been above the threshold for the specified amount of time
        .critReset(lambda: "crit_reset_duration" >= critResetTime) // Reset the alert when CPU percentage has come back below the threshold for the right time
        .stateChangesOnly() // Only send out information when an alert changes from normal to critical, or back again
        .log('/var/log/kapacitor/kapacitor_alerts.log') // Record in a log file that this alert was generated / reset
        .email() // Send the alert via email 
    |influxDBOut() // Write the alert data back into InfluxDB for later reference...
        .measurement('kapacitor_alerts') // The name to store the data under
        .tag('kapacitor_alert_level', 'critical') // Information on the alert
        .tag('metric', 'cpu') // The type of alert that was generated

The above TICK script generates a “Critical” level alert when the CPU usage on a given server has been above 80% for 15 minutes or more. Once it has alerted, the alert will not reset until the CPU usage has come back down below 80% for a further 15 minutes. Both the initial notification and the “close” notification are sent via email.

The vast majority of our TICK scripts are very similar to the above, with changes to monitor different metrics (memory, disk space, disk IO, etc.) and with different threshold levels and durations.

To load this TICK script into Kapacitor, we use the kapacitor command line interface. Here’s what we’d run:

kapacitor define example_server_cpu -type stream -tick cpu.tick -dbrp example_server.autogen
kapacitor enable example_server_cpu

This creates a Kapacitor alert with the name “example_server_cpu”, with the “stream” alert type, against a database and retention policy we specify.

In reality, we automate this process with another script. This also replaces the $reported_hostname slug with the actual hostname of the server we’re setting the alert up for.
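To illustrate the idea, here is a minimal sketch of what such a wrapper script could look like. The hostname, file names, and task naming below are hypothetical stand-ins, not our actual automation:

```shell
#!/bin/sh
# Illustrative sketch only -- our real deploy script differs in the details.
hostname="web01.example.com"   # hypothetical server name

# Tiny stand-in template containing the $reported_hostname slug
printf '|where(lambda: ("host" == '\''$reported_hostname'\''))\n' > cpu.tick.tpl

# Replace the slug with the real hostname to produce the final TICK script
sed "s/\$reported_hostname/${hostname}/g" cpu.tick.tpl > cpu.tick

cat cpu.tick

# The real script would then load and enable the result, e.g.:
# kapacitor define "${hostname}_cpu" -type stream -tick cpu.tick -dbrp "${hostname}.autogen"
# kapacitor enable "${hostname}_cpu"
```

The sed substitution is the key step; everything else is the same define/enable pair shown above.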

Getting customer servers reporting

Now that we could actually alert on information coming into InfluxDB, it was time to get each of our customers’ servers reporting in. Since we have a large number of customer systems to monitor, installing and configuring Telegraf by hand was simply not an option. We used Ansible to roll the configuration out to the servers that needed it, which involved 12 different operating systems and 4 different configurations.

Here’s a list of the tasks that Ansible carries out for us:

  • On our servers:
    • Create a specific InfluxDB database for the customer’s server
    • Create a locked-down, write-only InfluxDB user for the server to send its data in with
    • Add a Grafana data source to link the database to the customer
  • On the customer’s server:
    • Set up the Telegraf repo to ensure it stays updated
    • Install Telegraf
    • Configure Telegraf outputs to point to our endpoint with the correct server specific credentials
    • Configure Telegraf inputs with all the metrics we want to capture
    • Restart Telegraf to load the new configuration

The above should be pretty self-explanatory. Whilst every one of the above steps would be carried out for a new server, we wrote the Ansible files to allow for most of them to be run independently of one another. This means that in future we’d be able to, for example, include another input to report metrics on, with relative ease.

For those of you not familiar with Ansible, here’s an excerpt from one of the files. It places a Telegraf config file into the relevant directory on the server, and sets the file permissions to the values we want:

---
- name: Copy inputs config onto client
  copy:
    src: ../files/telegraf/telegraf_inputs.conf
    dest: /etc/telegraf/telegraf.d/telegraf_inputs.conf
    owner: root
    group: root
    mode: 0644
  become: yes


With the use of more Ansible we incorporated the various tasks into a single repository structure, did lots of testing, and then ran things against our customers’ servers. Shortly after, we had all of our customers’ servers reporting in. After making sure everything looked right, we created and enabled various alerts for each server. The process for this was to write a Bash script which looped over a list of our customers’ servers and the available alert scripts, and combined them so that we had alerts for the key metrics across all servers. The floodgates had been opened!
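That looping script can be sketched roughly as follows. The server and alert names here are placeholders for illustration; the real script reads them from our server list:

```shell
#!/bin/sh
# Simplified sketch -- server and alert names below are placeholders.
servers="web01 db01"
alerts="cpu memory disk"

: > kapacitor_commands.sh   # build up the command list in a file
for server in $servers; do
  for alert in $alerts; do
    echo "kapacitor define ${server}_${alert} -type stream -tick ${alert}.tick -dbrp ${server}.autogen" >> kapacitor_commands.sh
    echo "kapacitor enable ${server}_${alert}" >> kapacitor_commands.sh
  done
done

# 2 servers x 3 alerts x 2 commands = 12 lines of kapacitor invocations
wc -l < kapacitor_commands.sh
```

Generating the commands into a file first made it easy to eyeball the result before letting it loose on every server.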

Summary

So, at the end of everything covered in the posts in this series, we had ourselves a very respectable New Relic replacement. We ran the two systems side by side for a few weeks and were very happy with the outcome. While what we have described here is a basic guide to setting the system up, we have already started to make improvements way beyond the power we used to have. If any of them are exciting enough, there will be more blog posts coming your way, so make sure you come back soon.

We’re also hoping to open source all of our TICK scripts, Ansible configs, and various other snippets used to tie everything together at some point, once they’ve been tidied up and improved a bit more. If you cannot wait that long and need them now, drop us a line and we’ll do our best to help you out.

I hope you’ve enjoyed this series. It was a great project that the whole company took part in, and it enabled us to provide an even better experience for our customers. Thanks for reading!

Feature image background by swadley licensed CC BY 2.0.

Replacement Server Monitoring – Part 2: Building the replacement

This is part two of a three part series of blog posts about picking a replacement monitoring solution, getting it running and ready, and finally moving our customers over to it.

In our last post we discussed our need for a replacement monitoring system and our pick for the software stack we were going to build it on. If you haven’t already, you should go and read that before continuing with this blog post.

This post aims to detail the set up and configuration of the different components to work together, along with some additional customisations we made to get the functionality we wanted.

Component Installation

As mentioned in the previous entry in this series, InfluxData, the TICK stack creators, provide package repositories where pre-built, ready-to-use packages are available. This eliminates the need to configure and compile source code before we can use it, letting us install and run the software with a few commands and very predictable results, as opposed to the many commands often needed for compilation, with sometimes wildly varying results. Great stuff.

All components are available from the same repository. Here’s how you install them (the example shown is for an Ubuntu 16.04 “Xenial” system):

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update && sudo apt-get install influxdb
sudo systemctl start influxdb

The above steps are also identical for the other components, Telegraf, Chronograf and Kapacitor. You’ll just need to replace “influxdb” with the correct name in lines 4 and 5.

Configuring and linking the components

As each of the components is created by the same people, InfluxData, linking them together is fortunately very easy (another reason we went with the TICK stack). I’ll show you what additional configuration was put in place for the components and how we then linked them together. Note that the components are out of order here, as the configuration of some components is a prerequisite to linking them to another.

InfluxDB

The main change that we make to InfluxDB is to have it listen for connections over HTTPS, meaning any data flowing to/from it will be encrypted. (To do this, you will need to have an SSL certificate and key pair to use. Obtaining that cert/key pair is outside the scope of the blog post). We also require authentication for logins, and disable the query log. We then restart InfluxDB for these changes to take effect.

sudo vim /etc/influxdb/influxdb.conf

[http]
    enabled = true
    bind-address = "0.0.0.0:8086"
    auth-enabled = true
    log-enabled = false
    https-enabled = true
    https-certificate = "/etc/influxdb/ssl/reporting-endpoint.dogsbodytechnology.com.pem"

sudo systemctl restart influxdb

Note that the path used for the “https-certificate” parameter will need to exist on your system of course.

We then need to create an administrative user like so:

influx -ssl -host ivory.dogsbodyhosting.net
> CREATE USER admin WITH PASSWORD 'superstrongpassword' WITH ALL PRIVILEGES
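In the same influx session we can also create the locked-down, per-server, write-only users described in part 3 of this series. A hedged example, with an illustrative database/user name (in practice we use UUIDs, as shown in the Telegraf config below):

```
> CREATE DATABASE "example_server"
> CREATE USER "example_server" WITH PASSWORD 'supersecurepassword'
> GRANT WRITE ON "example_server" TO "example_server"
```

With only WRITE granted, a compromised server can push metrics but can never read another customer’s data.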

Telegraf

The customisations for Telegraf involve telling it where to report its metrics, and what metrics to record. We have an automated process, using Ansible, for rolling these customisations out to customer servers, which we’ll cover in the next part of this series. Make sure you check back for that. These are essentially what changes are made:

sudo vim /etc/telegraf/telegraf.d/outputs.conf

[[outputs.influxdb]]
  urls = ["https://reporting-endpoint.dogsbodytechnology.com:8086"]
  database = "3340ad1c-31ac-11e8-bfaf-5ba54621292f"
  username = "3340ad1c-31ac-11e8-bfaf-5ba54621292f"
  password = "supersecurepassword"
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"

The above dictates that Telegraf should connect securely over HTTPS and tells it the username, database and password to use for its connection.

We also need to tell Telegraf what metrics it should record. This is configured like so:

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = true
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.net]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.procstat]]
  pattern = "."

The above tells Telegraf what metrics to report, and customises how they are reported a little. For example, we tell it to ignore some pseudo-filesystems in the disk section, as these aren’t important to us.
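If you want to sanity-check the configuration before restarting the service, Telegraf ships with a test mode that gathers each configured input once and prints the metrics to stdout instead of sending them to InfluxDB:

```
telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d --test
```

This catches typos in the input config without touching the database.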

Kapacitor

The customisations for Kapacitor primarily tell it which InfluxDB instance it should use, and the channels it should use for sending out alerts:

sudo vim /etc/kapacitor/kapacitor.conf
    [http]
    log-enabled = false
    
    [logging]
    level = "WARN"

    [[influxdb]]
    name = "ivory.dogsbodyhosting.net"
    urls = ["https://reporting-endpoint.dogsbodytechnology.com:8086"]
    username = "admin"
    password = "supersecurepassword"

    [pushover]
    enabled = true
    token = "yourpushovertoken"
    user-key = "yourpushoveruserkey"

    [smtp]
    enabled = true
    host = "localhost"
    port = 25
    username = ""
    password = ""
    from = "alerts@dogsbodytechnology.com"
    to = ["sysadmin@dogsbodytechnology.com"]

As you can probably work out, we use Pushover and email to send/receive our alert messages. This is subject to change over time. During the development phase, I used the Slack output.
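After a restart, the kapacitor CLI can be used to confirm the daemon is up and receiving data from InfluxDB (the task list will be empty until alert scripts are defined):

```
sudo systemctl restart kapacitor
kapacitor list tasks
kapacitor stats ingress
```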

Grafana (instead of Chronograf)

Although the TICK stack offers its own visualisation (and control) tool, Chronograf, we ended up using the very popular Grafana instead. At the time we were building the replacement solution, Chronograf, although very pretty, was somewhat lacking in features, and the features that did exist were sometimes buggy. Please do note that Chronograf was the only component that was still in beta at that point in time. It’s now had a full release and another ~5 months of development. You should definitely try it out for yourself before jumping straight to Grafana. We intend to re-evaluate Chronograf ourselves soon, especially as it is able to control the other components in the TICK stack, something which Grafana does not offer at all.

The Grafana install is pretty straightforward, as it also has a package repository:

sudo vim /etc/apt/sources.list.d/grafana.list
    deb https://packagecloud.io/grafana/stable/debian/ jessie main
sudo apt update
sudo apt install grafana

We then of course make some customisations. The important part here is setting the base URL, which is required because we run Grafana behind an nginx reverse proxy. (We love nginx and use it wherever we get the chance. We won’t detail the customisations here though, as they’re not strictly related to the monitoring solution, and Grafana works just fine on its own.)

sudo vim /etc/grafana/grafana.ini
    [server]
    domain = display-endpoint.dogsbodytechnology.com
    root_url = %(protocol)s://%(domain)s/grafana
sudo systemctl restart grafana-server

Summary

The steps above left us with a very powerful and customisable monitoring solution, which worked fantastically for us. Be sure to check back for future instalments in this series. We’ll cover setting up alerts with Kapacitor, creating awesome visualisations with Grafana, and getting all of our hundreds of customers’ servers reporting in and alerting.

Feature image background by tomandellystravels licensed CC BY 2.0.

Replacement Server Monitoring – Part 1: Picking a Replacement

As a company primarily dealing with Linux servers and keeping them online constantly, here at Dogsbody we take a huge interest in the current status of any and all servers we’re responsible for. Having accurate and up-to-date information allows us to move proactively and remedy potential problems before they become service-impacting for our customers.

For many years, and for as long as I have worked at the company, we’d used an offering from New Relic, called simply “Servers”. In 2017, New Relic announced that they would be discontinuing their “Servers” offering, with their “Infrastructure” product taking its place. The pricing for New Relic Infrastructure was exorbitant for our use case, and there were a few things we wanted from our monitoring solution that New Relic didn’t offer, so being the tinkerers that we are, we decided to implement our own.

This is a 3 part series of blog posts about picking a replacement monitoring solution, getting it running and ready, and finally moving our customers over to it.

What we needed from our new solution

The phase one objective for this project was rather simple: to replicate the core functionality offered by New Relic. This meant that the following items were considered crucial:

  • Configurable alert policies – All servers are different. Being able to tweak the thresholds for alerts depending on the server was very important to us. Nobody likes false alarms, especially not in the middle of the night!
  • Historical data – Being able to view system metrics at a given timestamp is of huge help when investigating problems that have occurred in the past
  • Easy to install and lightweight server-side software – As we’d be needing to install the monitoring tool on hundreds of servers, some with very low resources, we needed to ensure that this was a breeze to configure and as slim as possible
  • Webhook support for alerts – Our alerting process is built around having alerts from various different monitoring tools report to a single endpoint, where we handle the alerting with custom logic. Flexibility in our alerts was a must-have

Solutions we considered

A quick Google for “linux server monitoring” returns a lot of results. The first round of investigations essentially consisted of checking out the ones we’d heard about and reading up on what they had to offer. Anything of note got recorded for later reference, including any solutions that we knew would not be suitable for whatever reason. It didn’t take very long for a short list of “big players” to present themselves. Now, this is not to say that we discounted any solutions on account of them being small, but we did want a solution that was going to be stable and widely supported from the get-go. We wanted to get on with using the software, instead of spending time getting it to install/run.

The big names were Nagios, Zabbix, Prometheus, and Influx (TICK).

After much reading of the available documentation, performing some test installations (some successful, some very much not), and having a general play with each of them, I decided to look further at the TICK stack from InfluxData. I won’t go too much into the negatives of the failed candidates, but the main points across them were:

  • Complex installation and/or management of central server
  • Poor / convoluted documentation
  • Lack of repositories for agent installation

Influx (TICK)

The monitoring solution offered by Influx consists of 4 parts, each of which can be installed independently of one another:

Telegraf – Agent for collecting and reporting system metrics

InfluxDB – Database to store metrics

Chronograf – Management and graphing interface for the rest of the stack

Kapacitor – Data processing and alerting engine


Package repositories existed for all parts of the stack, most importantly for Telegraf which would be going on customer systems. This allowed for easy installation, updating, and removal of any of the components.

One of the biggest advantages for InfluxDB was the very simple installation: add the repo, install the package, start the software. At this point Influx was ready to accept metrics reported from a server running Telegraf (or anything else for that matter; many clients support reporting to InfluxDB, which was another positive).

In the same vein, the Telegraf installation was also very easy, using the same steps as above, with the additional step of updating the config to tell the software where to report its metrics to. This is a one-line change in the config, followed by a quick restart of the software.

At this point we had basically all of the system information we could ever need, in an easy to access format, only a few seconds after things happen. Awesome.

Although the most important functionality to replicate was the alerting, the next thing we installed and focused on was the visualisation of the data Telegraf was reporting into InfluxDB. We needed to ensure the data we were receiving mirrored what we were seeing in New Relic, and it can also be tricky to create test alerts when you have no visibility of the data you’re alerting against, so we needed some graphs (everyone loves pretty graphs as well, of course!)

As mentioned above, Chronograf is the component of the TICK stack responsible for data visualisation, and also allows you to interface with InfluxDB and Kapacitor, to run queries and create alerts, respectively.

In summary, the TICK stack offered us an open source, modular and easy to use system. It felt pleasant to use, the documentation was reasonable, and the system seemed very stable. We had a great base on which to design and build our new server monitoring system. Exciting!

Be sure to check back for the next part in the series which covers designing and building the real thing.

Feature image background by xmodulo licensed CC BY 2.0.

Congratulations Jim Carter

We are very pleased to congratulate Jim Carter on completing his Linux Systems Apprenticeship.

Jim now holds…

City & Guilds Certificate in IT Systems and Principles
City & Guilds Level 3 Diploma in IT Professional Competence

We are even more pleased that Jim has chosen to continue his career with Dogsbody Technology as a permanent member of staff. Jim is now looking forward to continuing his education with professional qualifications from Amazon Web Services (AWS) and honing his coding skills (we will break his habit of using Python “for” loops for everything).

If you are interested in joining the Dogsbody Technology team as a Linux Systems Apprentice please apply here.


2017 Christmas Shutdown

Christmas is fast approaching and we wanted to let you know that Dogsbody Technology will be taking some time off to celebrate the festive season.

Our office will be closed on the following days and responses to emails may be slower:
– Fri 15 Dec 2017 – Office Closed after 13:00
– Fri 22 Dec 2017 – Office Closed after 13:00
– Mon 25 Dec 2017 – Public Holiday
– Tue 26 Dec 2017 – Public Holiday
– Wed 27 Dec 2017 – Office Closed
– Thu 28 Dec 2017 – Office Closed
– Fri 29 Dec 2017 – Office Closed
– Mon 1 Jan 2018 – Public Holiday

During this time any issues will be dealt with on an emergency out-of-hours basis, and we will only be able to support customers who are experiencing a situation where business cannot function without a resolution, such as:
– Website / Server down
– Inability to trade online

We will continue monitoring and patching servers as usual over the Christmas period.

We recommend our hosting customers check our status site as usual which will continue to be updated.

If you need to raise an issue please use the standard contact details as calls to our office and support emails will be routed to the engineer on call.

Thank you for your continued support throughout 2017. We hope you have a very Merry Christmas and a Happy New Year.

#FoodbankAdvent 2017 – The Reverse Advent Calendar

As foodbank demand soars in the UK, Dogsbody Technology are proud to announce that this year, instead of further charity giving or sending corporate gifts, we will be taking part in the Reverse Advent Calendar #FoodbankAdvent by donating essential items to our local Farnborough Trussell Trust Foodbank.

December is the busiest month of the year for foodbanks, with 45% more referrals during the two weeks before Christmas. More than 90% of the food donations come from the public.

The Trussell Trust runs the largest network of foodbanks in the UK, giving emergency food and support to people in crisis. Thirteen million people live below the poverty line and in the last year they gave 1,182,954 three-day emergency food supplies to people in crisis – up over 6% from the previous year. Of this number 436,938 went to children.

What is a Reverse Advent Calendar?

Traditionally advent calendars are something you open each day from the 1st to the 25th of December to get a reward. It used to be a tiny picture or a chocolate; now adults can indulge themselves with everything from make-up to alcohol!

Instead with the Reverse Advent Calendar you start with nothing (empty box) and put one item for your local food bank into it every day. You could do this for 25 days to mirror the advent calendar – or perhaps for a whole month.

Follow us on social media to see our Advent grow!

Throughout December we will be posting updates on how our calendar is growing, and our aim is to donate over 60 items in total.

Want to take part too? It’s not too late to start!

  • Read the Trussell Trust’s What goes in a food bank parcel and non food items to see the items they like to be donated
  • Find your local Trussell Trust food bank or donate to others in your area.
  • Look at your local food bank’s website for the list of items they urgently need.
  • Donate long life items (tinned or dried) so as not to waste fresh food (which goes off quickly)
  • Get your donation to the food bank by early December for them to be useful for Christmas
  • If you want to collect an item a day as per the advent calendar, then your donations will be just as welcome in January as winter sets in!

In one of the richest countries in the world, no one should be hungry at any time of year but especially not at Christmas. Dogsbody Technology hope to make a small difference to someone this Christmas.

Update: Dogsbody Technology donated 74 items to Farnborough Trussell Trust Foodbank #FoodbankAdvent weighing in at 37.1 Kg!

It was great to give something back to our community and the Trussell Trust Foodbank were very pleased with our donation – it is definitely something we will repeat!

A big thank you to all our customers and suppliers who have sent us season greetings, cards and presents – Dogsbody Technology are pleased to have such awesome allies :-)


We wish all our customers, suppliers and supporters a very Merry Christmas and a Happy 2018.


Dogsbody Technology are moving

After four years at Ferneberga House we are moving 3 miles around Farnborough airport to Cody Technology Park.

After 1st July 2017 our address will be …

Dogsbody Technology Ltd.
Cody Technology Park
Ively Road
Farnborough
Hampshire
GU14 0LX

Please update any record you have.  All other contact details (email addresses, phone numbers) will remain the same.

Cody Technology Park is a secure List X site which means that all visitors will need photo ID and a security check to get on-site.  As you can imagine we are very happy to add this extra level of protection to the layers of security we already have in place.

We will be moving over the weekend of Friday 30th June – Monday 3rd July 2017.  We aren’t expecting any delays when dealing with customers but please bear with us if we take a little longer to respond during this time.

We can’t wait to share pictures of our space in the future, watch this blog for info on some of the tech we will be installing :-)

Dogsbody walks for Cystic Fibrosis

Last Saturday 10th June 2017 The Techy Trekkers (8 employees from Dogsbody Technology and Adapt Digital) walked OVER 40 miles taking part in the Great Strides 65 Surrey Hills Team Challenge in aid of the Cystic Fibrosis Trust. It was without a doubt the biggest physical challenge any of us had ever faced.

Our Team was in the final wave which started at 7.30AM. There were 12 Checkpoints, 7 of which involved meeting with the wonderful team in our support car, who carried all the heavy stuff like food, drink and medical supplies! 40 Miles at the average walking pace of 3 MPH would take us 13 hours and 20 minutes with no stops. Our actual moving time (according to our GPS) was 13 hours and 49 minutes, which wasn’t so bad – it was the support stops that slowly got longer as we got more tired, needed more time to eat, tend to feet and queue for the loo.

We ended up completing the event at 1AM to an amazing cheer from the organisers who were brilliant on the day; it may have taken us 2 hours longer than we planned for but all 8 of us finished and we are immensely proud of the team for continuing despite the blisters and pain.

Our team of truly amazing people have raised over £2,300 in sponsorship so far but the whole event currently stands at a fundraising total (inclusive of Gift Aid) of £200,233.36 for the walk and a further £11,111.61 for the ultra (running race)  – a massive amount which will help the Cystic Fibrosis Trust in its mission to ensure that everyone born with cystic fibrosis can live a Life Unlimited.

There is still one month left to donate to this amazing cause, so please spare some pennies if you can, to help us reach our personal target of £2,500.00.

“I completed the hardest physical and mental challenge of my life with a team of amazing people!” – Teammate Katie

Images courtesy of Jan Benton, Tracey Clarkson & Mark Turner.

Dogsbody Technology and Adapt Digital take on Great Strides 65

Employees from Dogsbody Technology and Adapt Digital are dusting off their walking boots for The Cystic Fibrosis Trust and taking part in the Great Strides 65 South – Surrey Hills – A 40-mile team walking challenge through Surrey countryside taking in such delights as the North Downs Way, Greensand Way, Devil’s Punchbowl and St Martha’s Hill. We need to complete the route within 17 hours.

If you’re feeling generous, or feel like supporting our inevitable pain (it’s more than a marathon ;-) but walking….), then please do sponsor us. Dogsbody Technology have already matched their employees’ donations up to a maximum of £500 – meaning £500 has become £1000 :-)

The 8 Techy Trekkers shun natural daylight preferring the warming glow of computer screens. Needless to say… We aren’t sure if we can do this either!

We have had 4 months to train for walking 40 miles in one day – Saturday 10th June 2017.  10,000 steps would be almost 5 miles for the average adult….we need to do 8 times that!!!

The 40 Mile Route

Why are we doing this? To support and raise funds for The Cystic Fibrosis Trust – an organisation who believe that the day when people with CF can live a life unlimited by their condition is within our reach.

6 of the 8 ‘Techy Trekkers’

What is CF?

Cystic fibrosis (CF) is a life-limiting inherited genetic condition caused by a faulty gene that controls the movement of salt and water in and out of cells. It affects more than 10,800 people in the UK. You are born with cystic fibrosis and cannot catch it later in life, but more than 2.5 million people in the UK carry the faulty gene, around one in 25 of us – most without knowing.

To have CF, you need to have inherited two faulty copies of the gene (one from each parent), and as there are many different gene mutations that cause cystic fibrosis, each person with the condition can have very different symptoms depending on the two genes they carry. While people with CF often look healthy on the outside, each individual is battling their own range of symptoms on a daily basis – this is why it is often called one of the invisible diseases.

There is currently no cure for cystic fibrosis but many treatments are available to manage it, including physiotherapy, exercise, medication and nutrition.

Each week five babies are born with cystic fibrosis, and two people die.

So please sponsor us and help us make a difference for this great cause.

Your website on the Tor network

We are very proud to announce a unique service which enables any website to have a presence on the Tor network.

Hosting a website on the Tor network has previously been very challenging, requiring changes to both infrastructure and the site itself.

Dogsbody Technology Ltd. are now offering a turn-key solution to this problem, allowing almost any website to be placed on the Tor network without requiring expensive redevelopment.

We are launching this service based upon the work done by Alec Muffett on the Enterprise Onion Toolkit. Alec is a staunch defender of digital rights in the UK and is a member of the board of directors of the Open Rights Group. He led the team that implemented Facebook’s onion site and we are delighted to have Alec as a consultant on this project.

Tor enables people from anywhere on the planet to access sites and data that might otherwise be blocked. They may use Tor to browse sites and content without divulging their physical or online location, and Tor’s Onion Networking provides people with greater confidence regarding the authenticity of the site they are accessing.

Tor users may already access your website anonymously and securely thanks to the Tor Network, however this does not protect their traffic from bad actors on the public internet or at rogue “exit nodes”. Providing a Tor “Onion Address” helps obviate these risks by giving people a trusted path to your site, a path that is owned by you. Onion addresses are “self-authenticating addresses”: the address itself is a cryptographic proof of the identity of the service. For example the Tor project website can also be found at http://expyuzz4wqqyqhjn.onion/ [NB: this link will only work in Tor-enabled browsers].

Adding an Onion address to your site gives a number of advantages:

  • Extra privacy for people that visit your site
  • Extra confidence that the site they have connected to is yours
  • Freedom from oversight and network surveillance
  • Access to your site cannot be blocked by an ISP, company or state
  • Enhanced communications integrity & tamper-proofing
  • Guaranteed use of a more security-aware browser, Tor Browser

Additionally: setting up an Onion address can provide less contention, more speed & more bandwidth to people accessing your site than they would get via Tor “exit nodes”.

Contact us now and we will set you up with a free demo of your site as an onion site on the Tor network.