
How to plant it forward?

27th August 2019

Note: this article was last modified on October 1st, 2019.

If you have read Carbon Offsetting (part 1) and Carbon Offsetting (part 2), you are probably ready for some good news about the environmental impact of carbon offsetting. Let’s go back to the client who wanted to offset all carbon associated with one event or business activity, or – best case – all business / organizational activities, be it Green IT, production, travel, commuting etc.

Let’s assume that a certain carbon volume to offset came out of a calculation at one of the many available online carbon calculators. What can we do today? Is there a way to speed up the sequestration of CO2, working with existing carbon offsetting schemes while sticking to the additionality approach? The answer to this question is yes, but it asks for a rather unconventional approach. Please bear with me.

In this blog I will focus on the possibility of direct tree planting as an alternative method to take the carbon associated with these business activities out of the air within just a few years after planting. And yes, it will be expensive, but because you bypass the middlemen, it probably will not require more budget!

Basically four steps are involved, after which several other things will happen out of sight – but not out of mind and closely monitored by independent auditors.

Step 1: Estimate the (yearly) need in tons, using one of the carbon footprint calculators available online. Don’t hit the offset button yet.

Step 2: Connect with a direct tree planting project, and ask if they will provide you with a sequestration curve, showing how much carbon is sequestrated in the twenty years (or more) after planting. Such an S-curve usually looks something like this:

Photo credit: Union of Concerned Scientists.

If you have the carbon uptake split out per year, you can use the following spreadsheet (download .ods here), to see the compounded effect of your tree planting efforts, if you repeat this process year on year.
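If you would rather script this than use a spreadsheet, the sketch below shows the same compounding idea in TypeScript. The per-tree uptake figures are purely illustrative placeholders – substitute the curve your own planting project provides.

```typescript
// Illustrative per-tree uptake curve: kg of CO2 sequestered per tree in each
// year after planting. Replace with the curve from your own project.
const uptakePerTreeKg = [10, 15, 25, 40, 55, 60, 55, 45, 35, 25];

// Total CO2 (kg) taken out of the air in a given year, if `treesPerYear`
// new trees are planted every year, starting in year 0.
function sequesteredInYear(year: number, treesPerYear: number): number {
  let totalKg = 0;
  for (let plantedIn = 0; plantedIn <= year; plantedIn++) {
    const age = year - plantedIn; // how long this cohort has been growing
    if (age < uptakePerTreeKg.length) {
      totalKg += treesPerYear * uptakePerTreeKg[age];
    }
  }
  return totalKg;
}

// The compounded effect of planting 1,600 trees every year, over ten years.
for (let year = 0; year < 10; year++) {
  const tonnes = sequesteredInYear(year, 1600) / 1000;
  console.log(`year ${year}: ${tonnes.toFixed(1)} tonnes of CO2 sequestered`);
}
```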

This way we have determined a sufficient number of trees that need to be planted directly in order to offset a certain footprint within one, two or three years, depending on your available budget.

Now we need to tackle two more problems before we’re there:

  • Who owns the carbon?
  • How can we provide funding for the replanting, monitoring, auditing for the next twenty years?

Regarding the first issue: if we want to create a simpler user experience, it seems imperative that all the carbon that comes from the planting of the trees is owned by the sponsor. But wait – we have only calculated with the carbon for the first few years, so what will happen to the other 90% of the carbon that those trees will mitigate during their remaining lifetime? Under the ‘plant-it-forward’ scheme, we will make sure that nobody else claims it; more about that later.

The second problem, about the funding of the certification of the carbon, can be tackled in two ways, one complicated and one very simple, depending on the economics of the area where you would like the planting to happen. The main question is: is it necessary to change the socioeconomic context of the tree-planting area? If it is not, you can skip the next step (!) – more about that in the sequel to this article (coming soon).

If livelihood projects are a necessity, by planting a certain number of extra trees (let’s say 20%), additional carbon credits can be generated that will be sold on the open market. The proceeds from these sales will be sufficient to monitor and maintain the planted plots on behalf of the sponsor.

Step 3: Finance the project, including the 20% extra trees. To get to the total number of trees in one easy calculation: multiply the number of necessary carbon credits by a factor of 40*. Often prices are in the € 1 per tree range, which would bring the ‘price’ to around € 48 per ton of carbon (the 20% extra included) – not too bad actually!

Step 4: Do the same thing next year, and pretty soon you will start erasing your own historical footprint, instead of ‘just’ compensating for today.

Why is that necessary? The world is still speeding up (!) its burning of fossil fuels and its deforestation, leading to a rising concentration of CO2 in the air. The challenge of our lifetime is to stop this acceleration and finally begin the descent towards a level below 400 ppm – who knows when. This will require a tremendous amount of effort from our side, in which compensating the effect of one activity is by itself not going to make a huge impact, I am afraid.

With this simple ‘plant-it-forward’ scheme, several things will be accomplished:

First, we can safely claim that all of the carbon generated by a certain business activity will be mitigated within 1-3 years from now, by your ‘own’ forest that will clearly be marked on public forest explorers (meaning: no one else can claim having financed that plot of forest).

Secondly, by financing this large-scale planting approach, in the end you will mitigate at least ten times as much carbon as you needed in the first place. This extra carbon will not be sold on the open market nor can it be claimed on behalf of your organization, but it will be cancelled on behalf of the planet (meaning: no one else can claim the carbon, no double counting).

In the solution sketched above, the existing certification schemes and voluntary carbon markets will guarantee the necessary transparency. This means that this approach could be applied today.

Please note that in the age of freely available, up-to-date satellite images, a completely different approach to auditing could be used for countries where the livelihood aspect is not necessary to keep the forests alive, more about this soon in another post.

At The Green Web Foundation we’re eager to raise the bar for what it means to be a green hoster or datacenter, so let us know if you’re interested to plant-it-forward, or share your ideas to develop another kind of (dark) green energy. We might start our own forest again one day soon! 😉

Thanks for reading, please get in touch if you have any questions or remarks!

*) The real factor depends on two variables: the number of years that you give yourself to absorb the carbon that you generated this year, and the carbon uptake per tree within that time-frame.

Example: say you would like to sequestrate 40 tons of carbon for your hosting company within two years. You contacted a mangrove project, and the curve they provided makes clear that in the first year after planting, the trees will sequestrate 10 kilograms of carbon, and 15 kilograms in the second year – so 25 kilograms per tree in total during the first two growth years. The calculation now becomes very simple: divide 40,000 kilograms by 25 kilograms per tree to get the number of trees you need to plant (1,600). So the calculation factor to get from tons to trees would be 40 in this example.
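For anyone who prefers code over prose, here is the same worked example as a small TypeScript sketch; only the figures given above (10 kg, 15 kg, 40 tons, two years) are used, everything else is arithmetic.

```typescript
// Figures from the example above: a mangrove project whose trees take up
// 10 kg of CO2 in year one and 15 kg in year two after planting.
const uptakeCurveKg = [10, 15];

const tonnesToOffset = 40; // this year's footprint, in tonnes of CO2
const yearsToAbsorb = 2;   // how long you give yourself to absorb it

// Cumulative uptake per tree within the chosen time-frame (25 kg here).
const uptakePerTreeKg = uptakeCurveKg
  .slice(0, yearsToAbsorb)
  .reduce((sum, kg) => sum + kg, 0);

const treesNeeded = (tonnesToOffset * 1000) / uptakePerTreeKg; // 1,600 trees
const tonnesToTreesFactor = 1000 / uptakePerTreeKg;            // 40 trees per tonne

console.log({ uptakePerTreeKg, treesNeeded, tonnesToTreesFactor });
```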


Carbon offsetting (part 2)

22nd August 2019

The client on the other side of the table is clearly surprised by my remark. “But these are our trees, we’ve paid for them, how come we do not own the carbon rights?” The account manager from the carbon credit provider takes the lead, and tries to approach the issue differently: “Would you be willing to pay extra for the maintenance, replanting, auditing & certification? It will roughly triple the price per tree, and it will lead to the delivery of carbon credits to your company over the course of the next 20 years.”

The client looks at her colleague, and together they try to explain again: they will launch a big event in November, setting out exactly what their company will do to make themselves the green player in their niche. How can they neutralize the carbon footprint that will be associated with it, since it is nearly impossible for them to operate internationally without relying on lots of IT-facilities, road transport and air traffic? Planting trees felt like the way to go, but now they understand that they will not own the carbon, or, if they are willing to pay triple, it will take twenty years before they have reaped all the benefits from it. They’re not so sure anymore. The carbon offset account manager sighs, and offers them existing carbon credits instead: “We have a good batch available from a project that already started in 2011, that way you can offset directly without waiting 20 years.”

The example above illustrates the current gap between the needs of clients in a world where climate discussions are heating up quickly, and auditors and resellers who are rooted in decades-old methodological systems designed to be – well, precisely that: thorough and slow on purpose.

Apart from tree-planting, there are many other sectors in which carbon offsetting schemes play a role. And this is a fast-developing domain, so certain types of credits that are approved today will most probably not be allowed in a few years’ time (there is a debate about hydro and wind generated credits, for instance). But if we apply the client’s reasoning – how is this method going to slow down the buildup of CO2 in the air, or ideally even reverse it? – very few projects are left on the table.

The idea that the current climate neutrality schemes might not be sufficient to tackle the problems in the long run is something that is often written about, but mostly without a clear direction of what could work.

Recent news from The Guardian shows some of the peculiarities of the current discussion, this time sparked by Elton John’s statement that he bought carbon offsets for the use of a private plane by the Duke and Duchess of Sussex:

The world of carbon offsetting flights – where you can pay to have the equivalent of your emissions ‘cancelled out’ by projects that lower or remove emissions, such as reforestation or renewable energy – is not clearcut. While some argue it is better than doing nothing, others say it allows frequent flyers to assuage their guilt and the aviation industry to grow.

“The idea that you can fly ‘carbon neutral’ is very misleading,” says Roger Tyers, a research fellow at the University of Southampton, who studies attitudes to offsetting and recently made a work trip to China by train. “A plane that flies today emits carbon today. It’s very hard to know how fast an offset can remove that amount of carbon from the atmosphere.”

Don’t you feel a bit sorry for Elton John here? Ridiculed for sincerely trying to do the decent thing, if you ask me. But let’s take a closer look at the reasoning: since you cannot define how fast an offset can remove a ton of CO2 from the atmosphere, it is dubious to claim any kind of climate neutrality at all. While of course everyone is free to say what they want, the experts behind the Greenhouse Gas Protocol have defined carbon neutrality in exactly that way.

What is even more worrying, however, is that no alternative is presented: so even if cash were not an issue and you wanted to fly from A to B, there is still nothing you can do? Are we witnessing the need for a new carbon paradigm, one that is better adapted to the urgency of our climate crisis and offers a real perspective for responsible action?

Some formulas being used in carbon sequestration calculations

Before we try to turn this all into a new concept, I want to stress that we really see the added value of sequestration experts, monitoring, auditing and certification. Furthermore, twenty-year cycles do make a lot of sense, as carbon credits bring in the long-term money for monitoring, replanting and livelihood projects – this part of the equation is often overlooked, but it is essential if you want the inhabitants or neighbors of these forests to benefit from the proceeds as well, once the initial funds for planting have been spent. So the critiques seem to focus not so much on the certification schemes themselves, but on the claims one can make by using them, i.e. carbon neutrality, because of the perceived slowness and, sometimes at least, unclear environmental impact of the current system.

In general, if you have a powerful but complex tool (think p2p filesharing or blockchain), large-scale adoption only comes when the technology itself is put under the hood of something simpler and easier to use, like Popcorn Time or a Bitcoin payment app. Carbon credit schemes today are in a comparable position: valuable building blocks, but they might be too complex for most customers / consumers and – for at least a part of the audience – unfit to claim climate neutrality.

How could we use these existing building blocks, and work with the current system, to create something new more or less overnight – something that could stand tall in an era of accelerating need to do something useful in this domain?

Read more in the last part of this series, ‘How to plant it forward?’


Open data for a greener web

5th August 2019

We’ve offered an API for checking domains against the Green Web database for a few years. Now we’ve started publishing open datasets around the green web that you can use in your own products and services, or for your own analysis. Read on to learn more about why.

Referring back to our map from an earlier post about why most of the web still runs on fossil fuels

In an earlier post where we mapped out a path to a greener web, we shared this Wardley map below, showing the chain of needs for building a digital product for end users.

We started with the product being available at the top, then ran through all the components needed for this, from the browser or device you access the product on, through to web servers, networks and datacentres, all the way down to where electricity comes from.

We also covered a bit about why the majority of the web runs on fossil fuels. For the most part, it’s really hard to find out whether services use them or not, because there isn’t much transparency in our industry about how we power our services, and the knowledge required to even ask this is rare.

We represented this rare, little-known info as the blue node at the bottom left of the map.

Why most of us end up running services on fossil fuels without meaning to

Strategies to make this more well known – open data, and open knowledge

There are a few approaches you can take to make something that’s relatively obscure and poorly known more commonplace. One approach is to release datasets, ideally with a permissive license to encourage re-use.

ip2geo databases as an example

Datasets already exist, freely available, that will let you take an IP address and get a good idea of its location. Maxmind does this with their GeoLite2 databases:

Because these databases exist, people end up building on top of them to make it easier to add these features into new software. With this data freely available, open source modules make incorporating these features an npm install away, like with the geoip-lite npm package:
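As a rough illustration of how little code this takes, here is a minimal TypeScript sketch, assuming the geoip-lite package (and its type definitions) are installed locally:

```typescript
// npm install geoip-lite @types/geoip-lite
import geoip from "geoip-lite";

const ip = "207.97.227.239"; // any IPv4 address you want to look up
const geo = geoip.lookup(ip); // returns null if the IP isn't in the database

if (geo) {
  console.log(`${ip} appears to be in ${geo.city || "an unknown city"}, ${geo.country}`);
  console.log(`approximate coordinates: ${geo.ll.join(", ")}`);
} else {
  console.log(`no location data found for ${ip}`);
}
```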

Applying these lessons for the green web – ip2green datasets

We’re following a similar approach, now – by publishing the datasets for free, for others to use.

You might represent this on the map like below – so if you have any product or service that uses IP addresses or domain names, you’d be able to add the ability to show how a service or site is powered yourself.

Our goal, like with the posts we share about how to choose green hosts, or how platform decisions you make affect the carbon footprint of your digital services, is to make this information more widespread and easily accessible.

You can see the datasets we make available on our dataset page – we follow the CSV on the Web convention for describing the data, and they’re licensed under the Open Database License.

These datasets are downloadable as CSVs, and in the near future, will be available as SQLite datasets too.
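As a sketch of what using one of these CSVs might look like: the snippet below assumes you have saved a file locally as greendomains.csv with url and green columns – the real file names and column names are described by the CSV on the Web metadata on the dataset page, so adjust accordingly.

```typescript
import { readFileSync } from "fs";

// Naive CSV parsing, fine for a quick look at the data. The file name and
// column names below are assumptions – check the real header row first.
const rows = readFileSync("greendomains.csv", "utf8")
  .trim()
  .split("\n")
  .map((line) => line.split(","));

const [header, ...records] = rows;
const urlColumn = header.indexOf("url");
const greenColumn = header.indexOf("green");

// Build a quick lookup table from domain to green status.
const greenByDomain = new Map<string, string>();
for (const record of records) {
  greenByDomain.set(record[urlColumn], record[greenColumn]);
}

console.log("example.com:", greenByDomain.get("example.com") ?? "not in the dataset");
```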

See the green web datasets available for free

Support and questions about the data

If you have questions about the datasets, you can contact us for support at support@thegreenwebfoundation.org.

If you want to use this data commercially, without keeping it open, we offer redistribution licenses that allow you to package the databases with your commercial products. Send an email to the same address.


Packets: yet another lever for a low carbon internet

24th July 2019

In earlier posts we spoke about Platform and Process as levers for a lower carbon internet. In this post we’ll talk about the impact of sending data over the wire, and what we might do there, if we took the science around our changing climate seriously.

So far in this series of posts, we’ve been talking about decisions about how you run infrastructure, as well as decisions about where and how your organisation works, in terms of helping us meet the goals we collectively need to meet around reducing emissions.

In both cases, your organisation typically has some degree of direct control over the situation. You make decisions about how you buy or build applications, and someone makes a deliberate decision to have an office somewhere, how much their staff need to travel, and how they travel.

But what about things you don’t have direct control of?

Sending data over the internet uses energy, and is hard to measure

When someone uses your service or site over the web, you have, at best, limited control over how they do so and what device they use. This is in many ways a good thing, as it can mean more people can use it.

But from a systemic view, if you’re trying to reduce emissions, it can complicate things, and I’ll outline why:

Different energy grids make up the internet

Because different energy grids around the world emit different amounts of carbon to generate the same amount of electricity, even if every datacentre in the world used exactly the same hardware, two people accessing the same page on your site, or watching the same video, could end up causing different amounts of carbon emissions by using your service.

One of the best ways to understand this is to look at Electricity Map, an open source project from Tomorrow that shows this using published open data about how clean, or dirty, different forms of energy are, and which ones are in use at a given time.

Electricity Map, showing Europe
Electricity Map – a map showing carbon emissions of the power we use

Different infrastructure is used at different times to generate power for the grid

In addition to this, depending on the time of day, you will see different kinds of power infrastructure generating electricity for the grid.

If a grid has loads of solar panels, these won’t generate much energy at night. If it has wind turbines, these depend on there being wind to work too. If you can’t buy power from somewhere where it is light, or windy, or where they have surplus power, you will need to rely on other forms of power generation to take up the slack. This works at a micro scale over a single day, but there are also factors that affect how much carbon is emitted by the grid that work over a longer time period.

Over the last few years, the British electricity grid has become much greener, largely down to natural gas being cheaper than coal to use as a fuel.

This is partly down to gas prices across Europe being at their lowest in ten years, but it’s also partly down to carbon pricing finally starting to influence decisions on how we power our grids.

A huge coal mining machine used by RWE in Germany
The infrastructure behind the internet, in many parts of the world

Similarly, in the last 6 months in Germany, the carbon emitted to generate the same amount of power has fallen by a fifth, because when we price carbon to take into account the harm it causes, it stops being so attractive. As a result, more than half the coal-fired power plants in the EU are now loss-making operations.

Before we start celebrating though, it’s worth remembering there are still plenty in operation – we have a long way to go to get to an entirely green grid.

Sending data over conductive wires, once you have dug the holes, is fairly efficient. Using wifi to send data inside a building is less efficient, and shorter range, but still relatively efficient.

We’re using more ways to access services, and they all use different amounts of energy too.

However, the energy used for sending data over a mobile network is greater than sending it over a wired or wifi connection, but the numbers vary. If we go with the figures used by the GHG Standards for ICT, we see a 45-fold increase in the energy needed to shift the same amount of data over a cellular connection compared to wired and wifi, whereas if we use the more recent 1byte model from the Shift Project, we see just a 2-fold difference. We’ll come back to this later.

Also, as we use faster mobile connections, we end up sending more data over more energy intensive connections, and because they’re convenient, we end up using them more too.

When you look at markets outside of Europe or America, it’s not uncommon to see mobile data plans like the one below from Airtel in India, offering gigabytes per day, for less than 3 euro per month.

A pricing table showing Airtel's data plans. They're a lot more generous than in Europe!
Typical tariffs from Airtel in India.

The impact of 5G on all of this

To make things even more complex, the coming rollout of 5G networks also changes the story for how much energy we use.

To give some context, in the UK, a typical mobile provider uses around 17,000 base stations to provide a mix of 3G and 4G coverage to its users. Because 5G networks rely on shorter range base stations to deliver the faster speeds, we need more of them. Quite a lot more.

For the coming shift to 5G, Mentor, a telecoms focussed management consultancy, is predicting a shift to using 50,000 base stations over 5 years for the same area. This is per operator, and with 4 main operators in the UK, we end up with around 200,000 energy hungry base stations needed to provide coverage alone, where we had around 65,000 before.

Even if these used the same energy per base station as earlier ones did, just having four times as many base stations to serve less than four times as many people would push energy use up, further complicating matters.

What you can do when you don’t have control over the infrastructure used to access your services

All is not lost though.

Even if you don’t have control over how much carbon is emitted to move the data used by a site or a service, you still have some control over the emissions from a service, by deciding how much data you send to users.

The good news is that over the last few years, we’ve recognised that big, heavy pages can be a problem for plenty of other reasons than climate, and we have a decent set of tools that we can repurpose.

Measuring carbon emissions with Website Carbon

One such example is Website Carbon, from Wholegrain Digital – it measures the size of a page, then converts from gigabytes transferred to energy used per gigabyte, and finally to carbon emitted for the energy used.

Website carbon - a tool for measuring carbon emissions for a single site
Website Carbon – the easiest way in to measuring carbon
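To make that chain of conversions concrete, here is a back-of-the-envelope sketch of the same idea. The two constants are illustrative placeholders rather than Website Carbon’s actual figures – as the rest of this post explains, published numbers for the energy cost of moving data vary enormously.

```typescript
// Both constants are illustrative placeholders, not official figures.
const KILOWATT_HOURS_PER_GB = 0.5;       // assumed energy to move one gigabyte
const GRAMS_CO2_PER_KILOWATT_HOUR = 475; // assumed average grid intensity

function gramsCo2PerPageView(pageSizeInBytes: number): number {
  const gigabytes = pageSizeInBytes / 1024 ** 3;
  const kilowattHours = gigabytes * KILOWATT_HOURS_PER_GB;
  return kilowattHours * GRAMS_CO2_PER_KILOWATT_HOUR;
}

// A 2 MB page, viewed 100,000 times a month.
const perView = gramsCo2PerPageView(2 * 1024 ** 2);
console.log(`${perView.toFixed(3)} g of CO2 per page view`);
console.log(`${((perView * 100_000) / 1000).toFixed(1)} kg of CO2 per month`);
```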

Performance budgets as carbon budgets for websites

This is good for a one-off check on a website, a bit like the one-off green check for a website, that you can use on our home page.

The screenshot of hosting the web
Our simple, one-off webpage checker

However, an increasingly common approach when trying to keep webpage size under control is to set a performance budget, and then regularly check that a page doesn’t go over this budget, to ensure a fast loading, accessible experience.

You can use open source tools like Google’s Lighthouse to enforce these budgets, by automating audits every time code changes, or running them on a regular schedule to catch performance issues on live sites:

A sample lighthouse view
A sample google lighthouse report

Lighthouse, but for fossil fuels and carbon emissions

We’ve created a plugin now for lighthouse, which we’re calling greenhouse, that lets you audit how green a page is by:

  • checking whether the servers used to load images, scripts and so on use green power, and giving you a score based on how many of them are green
  • converting the figures for data transfer to carbon emissions using the best methodology we can find, to take into account the things that make it hard to measure.
Greenhouse- showing a report outlining how many domains are green on a page
Greenhouse – google lighthouse but for how green your pages are

It’s open source, with a permissive license, and you can see the code on github. You don’t need to be a developer to contribute, and we have an open issue list you can add to if there’s something you want, and can’t see.

Measuring carbon for sending packets is really hard

It’s easy to get carbon measurements wrong, because most of us have no real intuition for what kinds of numbers seem high or not when talking about carbon. And when looking at the web, this is made even worse by the fact that even within the academic literature, there is a massive range in the figures used for measuring the carbon footprint of moving data.

And when we say massive, we really mean massive. We touched on the difference for fixed/wifi connections versus mobile before, but there’s a 20,000-fold difference between the lowest figures and the highest figures in peer reviewed papers over the last ten years.

This is like confusing the distance of driving across a city with driving to the moon (follow the links to have a go yourself, or see the original report outlining this colossal difference).

Others are looking at this too

Some of this was covered in the recent Mozilla IRL podcast on the carbon footprint of the internet, where they calculated the carbon emissions from their own podcast, which worked out to around a tonne per episode.

Mozilla’s IRL podcast covering the internet carbon footprint. It’s good!

Because they shared the final emissions and the assumptions about how dirty the grid was, you can also use these numbers to work backwards to get a rough idea of how many listeners they might have.

You can see how this works in this interactive Observable notebook, where we’ve implemented the 1byte model from French think tank The Shift Project.

A screenshot of an Observable notebook, working out the emissions from a podcast
Our interactive bandwidth calculator, created on Observable

In this example, we’ve left out the environmental impact of actually buying a device to listen to the podcast on.
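If you’d like to see the shape of that backwards calculation without opening the notebook, here is a rough sketch. Every constant is an illustrative assumption rather than a figure from Mozilla or the Shift Project, so treat the output as an order-of-magnitude guess.

```typescript
// All of these are illustrative assumptions, not published figures.
const EPISODE_CO2_KG = 1000;            // "around a tonne" per episode, as reported
const EPISODE_SIZE_MB = 50;             // assumed size of one episode download
const KILOWATT_HOURS_PER_GB = 0.5;      // assumed network energy per gigabyte
const KG_CO2_PER_KILOWATT_HOUR = 0.475; // assumed grid carbon intensity

const kgCo2PerDownload =
  (EPISODE_SIZE_MB / 1024) * KILOWATT_HOURS_PER_GB * KG_CO2_PER_KILOWATT_HOUR;

const estimatedDownloads = EPISODE_CO2_KG / kgCo2PerDownload;

console.log(`~${kgCo2PerDownload.toFixed(4)} kg of CO2 per download`);
console.log(`~${Math.round(estimatedDownloads).toLocaleString()} downloads per episode`);
```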

If you want to use numbers like this in future, you’ll be able to use this emissions calculation model in your own applications, as part of the work we’re doing to update this open source carbon emissions dataset and convert it so it’s installable using the popular node package manager.

Seeing your own internet footprint with Carbonalyser

Until then, it’s worth checking out the Shift Project’s Carbonalyser browser plugin – this will give you a running total of all the bandwidth you’ve used when browsing the web, as well as showing you the carbon footprint of doing so.

Carbonalyser from The Shift Project, showing a pie chart with a breakdown of bandwidth in use
Carbonalyser – an internet carbon footprint plugin

Looking at where impact is with packets, video vs web design

When we’re talking about the environmental impact of sending data around, it’s easy to become focussed on the planet-saving aspects of website design, but it’s also worth being aware of what else is using the bandwidth in our pipes.

Earlier this month, the Shift Project published a report with a breakdown of the emissions from transferring data, splitting out the impact of video, versus other types of use.

The results turn out to be pretty eye opening, in more ways than one – video makes up 80% of the traffic sent over the internet now, and of that, more than a quarter is porn.

That is roughly comparable with the bandwidth used by all the webpages in the world.

This is also why we care about decoupling the use of the web, from the emissions caused by using the web – even if we were able to design every web page to be the fastest, lightweight version possible, there’s a risk that any savings would be wiped out by the growth in video, as it is sent at ever higher resolution, and used by ever more people.

A breakdown of the use of video around the world, and comparisons to the web. Web is tiny now!
Headline stats from the Shift Project’s recent report on online video

Summary

If you care about reducing carbon emissions from the digital services you use and build, we’ve described a number of levers available in this post as well as showing a number of open source projects we maintain, and use.

We’ve covered Google Lighthouse, our plugin Greenhouse, Website Carbon, The Shift Project’s Carbonalyser plugin, and their accompanying reports, about Lean ICT, and online video, as well as sharing a simple calculator for working out the emissions of sending data over the internet.

If you’d like to talk about reducing the emissions from your operations, and greening your stack of technology, we’d love to chat – follow the link to this contact form to learn more.


Place, Policy, Procurement – more levers for a lower carbon internet

10th July 2019

In an earlier post, we spoke about Provisioning and Providers, in our mental model for effecting change, if we want a lower carbon internet. In this post, we continue describing the model, and explore the elements of Process.

In the previous post, we introduced the Packets, Platform, Process model, which at a high level refers to:

  • Platform – emissions caused by servers and infrastructure you run yourself
  • Packets – emissions caused by infrastructure other people use (i.e the rest of the internet, that you don’t control)
  • Process – emissions baked into decisions about the way you deliver a digital product to your users, or how people are set up to work in your organisation

When we looked at Platform, it should have been fairly clear that we were talking about servers, and if you run a hosting company, you might run and own these yourself.

If you provide consulting, or build digital products, even if you don’t run these servers yourself, they still exist in your supply chain, and you’ll still have some responsibility for them.

More specifically, they’ll make up your Scope 3, or indirect emissions.

Scopes and how they relate to emissions

Working out who is responsible for emissions can be complicated.

The most widely accepted standard for tracking carbon emissions is the GHG Corporate Standard, and there is ample formal guidance online, from groups like ICT Footprint.eu.

But if you don’t have time to read those tomes, in general, you can think of emissions as falling into one of three scopes:

  • Scope 1 – Emissions from you burning fossil fuels yourself
  • Scope 2 – Emissions from generating electricity that you use.
  • Scope 3 – Indirect emissions – basically all the other emissions that happen in your supply chain.

If you wanted to relate them to something everyday, like coffee, you might think about emissions like this:

An example for internet companies

You can see a nice example from online payments company Stripe, who started reporting on this last year on their snazzy-looking environment site.

Stripe’s scoped carbon emissions, reported according to the GHG Corporate Standard:

  • Scope 1 – 320 tonnes
  • Scope 2 – 880 tonnes
  • Scope 3 – 16,800 tonnes

If you check Stripe’s site, you’ll see how they provide a nice breakdown of what scoped emissions are, but you’ll also see that Scope 3 covers more than just servers.

So, when we talk about reducing emissions, it makes sense to think more holistically too – hence Process, in Platform, Packets, Process.

Process – Place, Policy and Procurement

There are a number of questions worth asking when thinking about reducing emissions – we’ve grouped them into three areas accordingly:

Place

Where are the services or products being created?

People tend to work better in dry, comfortable, well-lit spaces, and keeping places in this condition typically takes energy – both to provide heat directly, and in the form of electricity for lighting, cooling, or air conditioning.

Even if you pay for a co-working space, or work from home, if you’re not using entirely green energy, there will be emissions from these buildings that you’ll need to take into account when thinking about the emissions associated with delivering a service, or building a digital product.

How are people getting there to work on them?

While a growing number of companies support remote work, if people travel to get to work, and they’re not cycling or walking, there will typically be emissions here too.

In fact, when sustainable web agency Wholegrain Digital looked at their emissions and blogged about it, this was the single largest source of emissions for them.

Because they’re a classy bunch though, if you want to work out the emissions from commuting for yourself, they’ve shared their own template that you can use for free.

Where else do people go to get the work done? How much business travel is there?

If you’ve made sensible decisions about how you provision your servers and who your providers are, you work in a building using green power, and you primarily do knowledge work, then there’s a good chance that travel will be another large source of emissions.

And typically, if there’s any flying, that’ll often be the single largest source when looking at travel, because even if flying might be the most efficient way to get somewhere on a per passenger-mile basis, the absolute numbers, in terms of mileage are huge.

78% of emissions for the OECD in 2017 came from air travel
For the OECD, a mainly knowledge work based organisation, the single largest source of emissions was aviation.

To give some perspective: the OECD literally wrote the book on how to buy green services in the public sector, and their work is mostly providing advice to other organisations – knowledge work, in other words. Yet their single biggest source of emissions in 2017 came from aviation, making up nearly 80% of their total. If your work involves consulting, sales or conferences, you might see similar patterns emerge. You can quickly get a rough figure using the ICT footprint self assessment tool.

However, depending on where you are in the world, there are measures you can take in this case, which we’ll cover in Policy.

Policy

We’ve mentioned some questions related to place that will help you understand where the levers are for reducing emissions. There are a few around policy worth asking, too.

Does your organisation have a target to reduce emissions in the future?

We’re increasingly seeing countries and states making legally binding targets to reach zero emissions (meaning it’ll be illegal to be a net emitter of carbon in that country).

Lyft, Stripe, Apple, Google, Microsoft, Bosch, BT, SAP are already at net-zero, after offsetting their own unavoidable emissions.

This isn’t just for huge organisations though.

Small companies like Wholegrain Digital have public targets to hit net-zero emissions, in addition to a completely open, Creative Commons licensed sustainability policy you can use as a starting point if you don’t have one yourself.

As they say, if we can do it, so can you. If you need more help, B-corp are increasingly providing guidance and services for small companies wanting to do good, who have traditionally been underserved by CSR companies working with larger clients.

Does your organisation track emissions?

As mentioned before, there are existing approaches used to track emissions, but they’ve typically only been available to larger companies.

However, this is changing.

Small companies like Carbon Analytics are now providing services to calculate emissions by talking to the accounting software you use – because you’ve categorised expenditure there already, this saves the hassle of duplicating it. If you already use Xero, you can try them for free to get a single carbon footprint based on your current data.

Is there an internal price on carbon?

Even if you have an idea of where the hot spots are in terms of emissions, or you know which activities create them, because lots of activities don’t carry their true price, you can still end up with conditions where there’s an incentive to carry out high carbon activities.

One approach used in a growing number of organisations is an internal carbon price: the impact of an activity is tracked, and an internal carbon fee is levied on it – usually the emissions, multiplied by the internal price of carbon – to represent the true cost and create a disincentive for that activity.

Some organisations put this into an internal ‘green fund’, whereas others use it to donate to charity.

Microsoft is an example of this, and they’ve published a detailed guide on how they do it too, for others to follow, so you can do this yourself.
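As a sketch of how simple the core mechanic is, the snippet below applies a made-up internal carbon price to a few made-up activities – the hard part in practice is measuring the emissions, not the multiplication.

```typescript
// A made-up internal carbon price and some made-up activities.
const INTERNAL_PRICE_PER_TONNE_EUR = 50;

interface Activity {
  name: string;
  tonnesCo2: number;
}

const activities: Activity[] = [
  { name: "business flights", tonnesCo2: 120 },
  { name: "cloud hosting", tonnesCo2: 30 },
  { name: "office energy", tonnesCo2: 15 },
];

for (const activity of activities) {
  const feeEur = activity.tonnesCo2 * INTERNAL_PRICE_PER_TONNE_EUR;
  console.log(`${activity.name}: ${activity.tonnesCo2} t CO2 -> €${feeEur} into the green fund`);
}
```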

When there are unavoidable emissions, how do you deal with them?

Generally speaking, if there are emissions left over after efforts to reduce them, it’s common to compensate for these emissions, with offsets or another mechanism or policy tool.

We’ll be writing a more in-depth piece around offsets in the future, as in our organisation we have staff who have worked at every part of the supply chain associated with their production.

Does your organisation support employees who want to take steps to reduce emissions as individuals?

We’ve identified that commuting can be a large source of emissions, even in small, green companies – in this case, supporting remote workers, in addition to increasing the chances of retaining people who have childcare needs, will likely reduce emissions through reduced commuting.

There’s good, well-curated guidance in these cases, not least from 18F, a government-run internal civic consultancy in the US – they’ve collated guidance.

Similarly, if we’ve identified that a chunk of the impact from running digital services comes from getting to work, then there are steps we can take for business travel too.

A good example here is the Climate Perks scheme from 10:10 – they’ve created a model travel policy that’s easy to adapt, and a well thought through scheme for employers interested in supporting employees who want to take lower carbon surface travel over flying for business travel (and often, as a result, retain them for longer).

Now, if these are reasonable things to ask internally, and we’ve established that emissions from our suppliers affect our emissions too, as they are in our supply chain, then perhaps it’s worth thinking about which of these questions it makes sense to ask them – which brings us to our final lever.

Procurement

In most organisations, if you are serious about reducing emissions, the biggest levers will be in your supply chain, and who you buy from.

If you are building digital products, we publish open datasets about the green web, provide an API to check any site online, as well as open source tools, from simple checkers to browser extensions, to make it easier to find green providers.

If you’re with a grey provider, it’s often worth starting a conversation with them about becoming a green provider, because one of the main reasons cited by people still running infrastructure on fossil fuels is that their customers don’t ask for any change from them.

So, this is one of the reasons we’ve started writing about the questions to ask when looking for a good host, and why we’re writing about how to think about your whole supply chain with Wardley Maps.

Work with us on making it easier to switch

We’re now looking for partners to work with to design a series of workshops to provide something like a “minimum viable audit” – a fast, accessible way to audit and reduce emissions around your digital infrastructure.

We’re basing it around the questions we’ve shared here as a starting point, with the aim of participants leaving the workshop with a clear, concrete set of next steps to take, to help design out the emissions from the digital services we increasingly rely on.

More specifically, we’re looking for partners to work with for two cases:

  1. at the level of an entire IT estate, to come up with an open, Creative Commons licensed policy that any organisation can adopt
  2. at the team level, for agile teams responsible for the end-to-end design of digital services. The idea here is to come up with some basic principles for cross functional teams to use, where there isn’t policy yet, to let them incorporate meaningful, measurable changes into the cadence of continuously delivering new versions of a service

Our intention is for these to be permissively licensed, either Creative Commons or something similarly open.

If this interests you, please drop us a line at support@thegreenwebfoundation.org, referring to this post.

(Disclosure – Wholegrain Digital is a partner in The Green Web Foundation Partner Scheme, but that’s not why we featured them in this post. They’re just the first small company we’ve found sharing open sustainability and travel policies for others to use like this.)


Provisioning and Providers – two levers for a lower carbon internet

21st June 2019

In this piece I’ll introduce you to a mental model you can use, if you are building digital products (like websites or apps), and you want to reduce the environmental impact created by the server infrastructure you’re using.

At the Green Web Foundation, we’ve spent some time thinking about the climate crisis and the role tech plays, and we’ve come to use a mental model to conceptualise where we are able to effect some change. We call it Platform, Packets, Process, and it more or less breaks down as follows:

  • Platform – emissions caused by servers and infrastructure you run yourself
  • Packets – emissions caused by infrastructure other people use (i.e the rest of the internet, that you don’t control)
  • Process – emissions baked into decisions about the way you deliver a digital product to your users, or how people are set up to work in your organisation

In this post we’ll just cover Platform, but we’ll touch on others in future.

Platform – your providers and your provisioning

As we’ve mentioned before, Platform here refers to your own infrastructure that you run or manage – so, servers, basically.

When you’re talking about servers, you have two levers available to you: who you choose to get your servers from – your providers – and how much infrastructure you rent or purchase to manage the workload you expect – your provisioning.

Provisioning – matching capacity to demand

In most cases, while we talk about the internet being a global phenomenon, the reality is that we’ll see usage ebb and flow at different times of day, especially if you build something designed to be used at work.

The chart below is from The Power of Wireless Cloud report, and was used to show how the usage profile of the internet changes at different times of day. You can see how usage of the internet drops off after midnight, then climbs again as we start using the internet at work, before we basically go home to watch Netflix, and start over. If you use Google Analytics or Matomo, you may see similarly shaped charts in usage for your own sites and products:

A chart showing different amounts of web traffic at different times of day
An example of a traffic curve

Pay for the peaks, and accept the troughs

When it was relatively complicated to bring a new server online, there was an incentive to pay for equipment that could handle the peak usage, even if it would spend most of its time idling, simply because bringing extra capacity online just for the peaks of use was so complicated. In the words of Adrian Cockcroft:

To optimize delivery the best practice was to amortize this high cost over a large amount of business logic in each release, and to release relatively infrequently, with a time to value measured in months for many organizations. Given long lead times for infrastructure changes, it was necessary to pre-provision extra capacity in advance and this lead to very low average utilization.

Evolution of business logic from monoliths through microservices, to functions – Adrian Cockcroft

So, you might visualise us buying a single, huge server, like so:

Buying a server for the maximum level of usage.

While this meant we could always handle the top end, it also meant that when the machine was idling, we were essentially wasting CPU capacity, as we paid the same no matter what the server was doing.

Burning money, burning coal

You can imagine all the wasted capacity that you’re paying for as money you’re burning – and in a very real sense, because the internet still largely runs on fossil fuels, this means you’re also paying to burn fossil fuels, and emit loads of CO2 into the sky – bad news for the climate:

Buying a single big server might be simple, but it can be wasteful.

Using smaller servers, and scaling horizontally

In this scenario, if your site or service became really popular, you’d typically scale vertically if possible, by buying an even bigger server, rather than re-architecting your whole setup.

Another approach has become more popular since then – which would be to use a larger number of smaller, virtual machines working together, that you can spin up and down to match usage.

You might picture this as a bunch of smaller machines, rather than a single monstrous box:

Burning less money, but still burning money (and coal)

This is an improvement over what we had before, as we’re not wasting quite so much excess capacity, but as we scale the number of machines up and down, to match traffic, the scaling is somewhat coarse, and we still end up with unused capacity – which equates to paying to burn CPU cycles, and burn fossil fuels once again.

A newer approach – abstracting the servers away entirely

An increasingly common approach now is what’s referred to as “serverless” – where you’re not really managing servers or scaling them up or down yourself at all.

Instead of paying for physical servers over months or years, or paying for easy-to-scale virtual servers over periods of days or hours, you’re now paying for computing capacity on a per-request basis.

When no one is using your service, you effectively pay nothing, and there’s no server running – you don’t pay for any unused capacity, as your services scale down to zero – which also means no unnecessary burning of fossil fuels too – huzzah!

Serverless, or functions as a service
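To make the difference between these three approaches concrete, here is a toy comparison using a made-up daily traffic curve – the numbers are illustrative, but they show how finer-grained provisioning wastes less of the capacity (and power) you pay for.

```typescript
// A made-up daily demand curve: units of capacity needed in each hour of the day.
const hourlyDemand = [2, 1, 1, 1, 2, 4, 8, 14, 18, 20, 19, 18, 17, 18, 19, 20, 18, 15, 12, 10, 8, 6, 4, 3];

const used = hourlyDemand.reduce((a, b) => a + b, 0);
const peak = Math.max(...hourlyDemand);

// One big server sized for the peak, running all day.
const singleBigServer = hourlyDemand.length * peak;

// Horizontal scaling, but only in coarse steps of 5 units of capacity.
const autoscaled = hourlyDemand
  .map((demand) => Math.ceil(demand / 5) * 5)
  .reduce((a, b) => a + b, 0);

// "Serverless": you only pay for the capacity you actually use.
const perRequest = used;

const unused = (capacity: number) =>
  `${(100 * (1 - used / capacity)).toFixed(0)}% of paid-for capacity unused`;

console.log(`single big server:      ${unused(singleBigServer)}`);
console.log(`autoscaled VMs:         ${unused(autoscaled)}`);
console.log(`per-request/serverless: ${unused(perRequest)}`);
```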

Your lovely golden cage

There’s always a catch here of course.

While paying for compute capacity this way involves less waste in terms of unused capacity, there is a comparatively small number of providers of this ‘serverless’ way of using computing power, and in many cases, you need to write code that fits the way the platform running it was designed, to get the most from it.

In many cases, more familiar databases or tools might not work the way you’re used to – so what you save in server bills, you might pay for in developer hours as staff try to bend unfamiliar new tools to their will.

This is why we think it’s useful to think in terms of who your provider is, as well as how you provision computing resources to run a website or service.

Choosing your provider, and the economics of green power

Something special has happened over the last 5-10 years. The cost of renewables has come down so much that it’s now cheaper to generate electricity with green power than with fossil fuels. This changes our diagrams somewhat, and this chart from Carbon Tracker is illustrative.

What does this mean for us now?

The biggest difference is that before, when we were paying for capacity we weren’t using directly, we were burning fossil fuels, and emitting carbon dioxide to do so.

All the ideas about being able to save money from avoiding waste still apply.

Two of the biggest players, Microsoft and Google, either run on green power, or account for all the carbon emitted by their servers with offsets or renewable energy credits, as well as providing all these ways to run your infrastructure – and if you want to use a smaller provider, you have many options.

Why this matters

We know how objectively bad the climate crisis is, and we know the human costs are getting worse by the day, often hitting the most vulnerable and least responsible the hardest.

So, when you’re not using green power, in addition to making tradeoffs about how much capacity you waste, you implicitly make a decision about how much avoidable harm to other people you’re okay with inflicting.

This is an uncomfortable thing to think about, but we’re grown-ups now, and just as we expect other professionals to take measures to prevent avoidable harm in other industries, we can and should expect it of ourselves – especially when switching is so easy now.

Summary

As we’ve covered before, there are tradeoffs you make when you go towards a more abstracted, serverless world, in terms of developer time to fit a specific platform, or greater risks of lock-in with a single company.

But if the idea of these new ways of building digital products sounds interesting, and you also care about living on a habitable planet, you can use both of these levers now – you can make choices about who your provider is to save on emissions by moving away from fossil fuels, but also about how you provision the resources you use for your websites and services.

Helping you choose

We maintain the world’s biggest open database about how the internet is powered – which providers run on green power, and which ones still use fossil fuels.

We make this information available through our API, our checking tools, and open datasets, for free, and we’re looking for ways to make it available to more people, and groups to design workshops and training materials to help people take this into account when designing new services.

If you think you might be able to help, or you’d like to work with us, you can get in touch via email, mention us on social media by mentioning @greenwebfound, or even drop into our public chat room to say hi.


How to choose a good green host

18th June 2019

Sometimes at the Green Web Foundation, people ask us what’s important to consider when looking for a green hosting provider. In this post we outline a few criteria to help you choose, and why.

The short version – unsurprisingly, it’s not that different from regular hosting.

Support and responsiveness

You tend to notice hosting the most when things are not working, and either customers, management, coworkers (and sometimes all three) are loudly asking when things will be back to their previous, not-on-fire state.

In cases like this you’ll appreciate clear communications from a support team, and having access to people with deep expertise – if you’re hosting something for someone, and their business relies on this hosting, it’s worth remembering that the real thing your customers are usually buying, is the ability to avoid disruptive changes to how they work.

It’s worth taking into account the cost of this disruption to work, and downtime in general, as it’ll help inform the second criteria, pricing.

Pricing

Once you know what you’re after, and you feel like the hosting company you’re looking at is competent, it’s at this point where it’s worth thinking about the cost of hosting with the organisation, and matching the expected usage of a site to the capacity you want to pay for.

If you’re using any kind of analytics product like Matomo or Google Analytics, and you have an idea of how happy you or your users are with the site, you’ll have an idea of the correct ‘size’ of server you need. The more certain you are about what you need, and how demand will change, the more likely it is that you can go for a longer period of hosting, where the costs per month are lower.

If you don’t know this yet, it’s probably not worth committing to a long contract without being able to make changes – this is doubly so if you’re choosing to use a proprietary platform without a clear migration path away from a provider. This brings us nicely to…

Standards and Lock-in

When you’re running a site with a given hosting provider, it’s important to have an idea of what leaving them would entail, if your needs change, or it turns out that they’re not a good fit.

There are things you can do to mitigate the risks here. Open standards and open source software make it easier to run the same kind of site with a different provider, compared to an entirely proprietary stack of technologies – although the trade-off you increasingly tend to make now is about the end-to-end experience of needing something changed, having a developer or site builder work on it, showing it in a staging area, then rolling out the changes to your site or product for your users.

With a growing number of tools, you might have a more specialised, proprietary platform built from a set of open, standardised components that would take a long time or a large investment in tooling to set up yourself – in this case, there will be a build vs buy decision you need to make.

Where it’s different for green hosts – transparency around how they source power

Finally, once you have the answers above, it’s worth thinking about how the servers you’re going to rely on are powered. If there isn’t an explicit statement from a given company about how they use renewable or sustainable power, it’s safest to assume the company is using power from a mix of sources, and you can get an idea of how “green” this power is with Electricity Map. This shows roughly how ‘dirty’ power is in realtime, based on open data about what kinds of power stations are generating electricity in a given country.

Using just this, you might make a decision to choose a host based in one country over another – the map below shows how electricity in Scandinavia, powered by lots of hydro and wind, is cleaner than Poland, which tends to get most of its power from coal.

The electricity map shows how “dirty” power is in given parts of the world.

If a company is making statements about green power, then they should be able to share details about how their power is considered “green”. In some cases they might source power directly, operating their own infrastructure, but in many cases, they’ll be buying special ‘renewable energy credits’ – tradable certificates allocated to companies that generate power from renewable sources.

This might be referred to as RECs, REGOs, GvOs or variants, and while there are small differences from country to country, the principle is largely the same – the “greenness” is tradeable, so a company running on “brown” fossil power can purchase these certificates and then say they run on green power.

Under this scheme, the idea is that money paid for these credits acts as an incentive for further investment in renewable power, speeding a shift away from fossil fuels.

The issue here is that like carbon offsets, the idea of paying money to reward green behaviour in a distant place does not sit well with everyone.

Generally speaking,  if you want to pay for renewable energy credits on the same grid where you are running your servers, you can, but it’s more expensive – just like how buying offsets in the same country where emissions from running servers take place will cost more.

If you’re choosing a hosting company that relies on renewable energy credits, ask what kind of credits they use, and whether they buy them from the same grid as the infrastructure you use. They should be able to give you a straight answer here.

The same applies if a provider is not relying on energy credits, but offsetting instead – ask how they know the offsets are effective, and if there is some kind of certification scheme in play, like the Gold Standard or the Verified Carbon Standard Program. It’s typically a warning sign that they are cutting corners if they cannot give a convincing answer about emissions either being avoided, or carbon being sequestered from the air – it’s often a sign that they are buying ‘lower quality’ offsets.

How does my part of the world compare for hosting options?

We maintain a directory of green hosts based on information in the Green Web Foundation Database, which we’ve been building over the last 10 years.

If, for example, you were looking for good green hosting companies in the UK, you’d jump to the listing – once you have that, try applying the criteria above to end up with a host you can trust to keep your site up, without it costing the earth.


Trying an idea – carbon.txt

14th May 2019

Over the last ten years, we’ve been recording which websites run on green power on the internet. We’ve tracked this, and how it’s changed, to create a database about how we power the web, and to provide an open dataset for data-informed discussion about how we want the web to be powered.

In this post, we share an idea we’re working on, that totally inverts this model.

Where we are now

Currently, the sad default for powering the web is to use fossil fuels – so just by using the web, we often end up unintentionally causing harm, harm that can be avoided by using greener options for powering our infrastructure.

This is part of the thinking behind the Sustainable Web Manifesto, which we are signatories of – of the options available to respond to climate change, it’s one of the easier ones for your organisation.

Screengrab of the Sustainable Web Manifesto front page.
We’re signatories and we think you might want to be too

What we mean when we say greener options

We say greener options because if you’ve spent any time looking at the power sector, then you’ll know how complicated it can get – to the extent that something that sounds fairly simple, like sourcing green power to run servers, can end up being much more complex than it sounds.

While in some parts of the world you can buy power from a company that only sells renewable power, there are other parts of the world where the grid is regulated to the point that you don’t even have a choice of supplier – if they burn coal, and you need to run a server in that part of the world, there’s no other option.

Elsewhere, there may be places where company A sells renewable power, but also sells the fact that the energy was generated from renewables separately, so that another company, B, can buy this credit and then say they use green power, even if the supplier they use relies on fossil fuels to generate electricity for their servers.

And as this diagram from Sonia Dunlop of RE-Source suggests, the more you look at it, the more complicated it can get:

Image from Liebreich Associates about the different ways you can source renewable power.
Sonia Dunlop of RE-Source’s diagram showing the options

To save you needing to do an MSc in energy policy before you can say whether a site is green or not, we’ve been sharing this draft diagram internally to give some idea of what we mean when we say green power – which is what we look for when you check a website using the Green Check API, or use one of our browser addons.

How we do this

We currently work this out by running a series of checks against the network for a given website, combined with some manual checks when organisations get in touch to be listed as green providers.

Earlier this year we open sourced the underlying code used in the platform, to make it possible to see how we do this in more detail. But in all these cases, we’ve relied on people signing into the Green Web Foundation admin site to update information about their infrastructure, which has created more friction than we’d like.

So, we’re trying something new.

As an experiment, we’re trying out an approach that uses the architecture of the web itself to make it easier to declare the information we typically ask people to provide in our admin site, and to make it possible to have verifiable, traceable claims of running on green power.

carbon.txt – robots.txt for renewable power

One of the things that helped search engines become such an integral part of the web we know is a convention known as robots.txt.

It made it easier for search engines to collate information about the content on a site, and let people build useful services on top of that data, by outlining what information you want search engines to index.

Inspired by this, and other .well-known approaches, we’ve been looking at a way to make it easier to automatically update information about how a website is powered, without needing to sign into an admin site run by the Green Web Foundation. We’re calling it carbon.txt.

Rather than relying on someone in your organisation to sign into an admin website to add information to a database, you’d declare similar information in a carbon.txt file available on your own site, in an agreed-upon format.

What specifically you’d put in it is something we’re still working on, and we have a site set up to help work this out, at carbontxt.org.
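Because the format is still being worked out, the most we can responsibly sketch here is discovery: checking whether a domain serves a carbon.txt file at all. The URL path and behaviour below are assumptions for illustration, not a finished spec.

```python
import urllib.request

def fetch_carbon_txt(domain, timeout=10):
    """Try to fetch a hypothetical carbon.txt file from the root of a domain.

    Returns the file contents as text, or None if nothing is served there.
    The location and format are assumptions – the real spec is being worked
    out at carbontxt.org.
    """
    url = f"https://{domain}/carbon.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read().decode("utf-8", errors="replace")
    except OSError:  # urllib.error.URLError is a subclass of OSError
        return None

if __name__ == "__main__":
    content = fetch_carbon_txt("example.com")
    print("found a carbon.txt" if content else "no carbon.txt served")
```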

You’re invited to join the discussion there.

Why this might be better

We have a few goals in mind with this approach, and we know it’s early days, but these are some of the specific things we want to do with this:

  • to reduce the need to store login data about users updating information about a site
  • to make it possible to build more tooling beyond the green checking tools so far
  • to make it possible to audit a digital supply chain, so you can verify claims of green power
  • to make it possible to point to structured, relevant information related to carbon emissions
  • to cover use cases we haven’t thought of yet

We’re actively looking for partners to work with on this – we think the web should be green, and if you think so too, please get in touch.


Plotting a path to a greener web with Wardley mapping

20th March 2019

I joined the Green Web Foundation this month, and I’ve been funded until the end of August by the German Prototype Fund to work out how to use open data and open source to help speed a transition to an internet running on renewable power.

I figured it might be worth sharing some maps here to help explain my thinking, and hopefully this should make for an interesting blog post. If nothing else, it should at least make it easier to understand how mapping can be used, if you’re new to the technique.

Starting out – let’s map out the value chain for running a digital product

There are various ways you can think about how you deliver a product or service to users, and one way I find helpful is to think about the value chain.

You start with the need you’re trying to meet, then you see what’s needed to make that possible. Then you see what’s needed for that, and so on, all the way along the chain of needs, until it stops being useful to you for the purposes of planning. If you’re interested in Wardley Mapping, there’s a free book, and many, many videos online.
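If it helps to see a chain of needs as data rather than a diagram, here is a toy sketch in Python – the component names roughly match the ones this post goes on to discuss, and the structure is purely illustrative.

```python
# A toy sketch of a value chain as a dependency graph – the component
# names roughly match the ones discussed later in this post.
VALUE_CHAIN = {
    "digital product": ["browser", "web server"],
    "browser": ["network connection", "markup", "assets"],
    "web server": ["compute", "persistent storage"],
    "compute": ["datacentre"],
    "persistent storage": ["datacentre"],
    "datacentre": ["electricity"],
    "electricity": [],
}

def walk_needs(component, depth=0):
    """Print the chain of needs below a component, one level per indent."""
    print("  " * depth + component)
    for need in VALUE_CHAIN.get(component, []):
        walk_needs(need, depth + 1)

if __name__ == "__main__":
    walk_needs("digital product")
```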

When I use the term digital product in this piece, I use it as a stand-in for a website, web app, or application otherwise accessed over the internet, typically using a common protocol that browsers can use, like HTTP or HTTPS. I might use the term website or app, but it’s best to assume they’re interchangeable here.

For the purposes of this map, I’m assuming our user is someone responsible for running a service or digital product. This might be a developer, a designer, or it might be a product manager responsible for working with a team of specialists to get something out the door.

Now this might be a bit meta, so bear with me – I’m going to start with our user need as making a digital product available to their users. For that, there’s a few things they might typically need.

Let’s run through them:

A first attempt at a value chain

fig 1. our initial value chain

What do we see here?

At the end-user’s end

First, let’s assume the eventual end-user of this product needs some kind of browser, which might be on a phone, laptop and so on. We’re also looking at an internet app, so we’ll need some form of network connection for them too.

Typically a browser provides some kind of useful experience by presenting rendered content, which is usually a combination of markup and some assets (e.g. images, movies, or in some cases, gobs of javascript running in the browser).

At the organisation’s end

As the web has grown up, it’s become more and more full of ne’er-do-wells, who will try to hack whoever they can. So, let’s assume we want to provide a secure connection too, so when people use our product through a browser, they see that reassuring green padlock.

Of course, there needs to be some process providing this mix of markup, and assets to the people using our service. Usually, this will be some web server we’d be running to generate our pages, or serve files. Increasingly these days, to make things load fast, you’d serve some other files from a content delivery network too.

Updating our product

If we didn’t ever need to change our product, this might be enough, but if we want to respond to what’s happening outside our organisation, we’ll probably want to make changes to content on our site. It’s common to use some kind of content management system, (CMS) to allow a wider range of people to make changes to a site, so let’s add that.

Getting changes to our users

For the more complex parts of our product, it’s a very good idea to have a repeatable way to get updated versions of our product to our users. These days, there will likely be some kind of repeatable deployment pipeline here.
We spoke about changes before – if we want to make changes, it’s really, really useful to be able to keep track of the changes we make. So we’d also use some form of versioning or source control, which our deployment pipeline typically relies on.

Having somewhere to put this

If we want to keep track of the changes we make, that implies some kind of persistent storage of data (this might be in a database or just a bunch of files – it doesn’t matter quite so much for this map).

And if we want to make changes or be able to run web servers in the first place, that implies some kind of compute power.

For computing power, and persistent storage, we need servers, which are almost always hosted in some kind of datacentre. These often rely on some backbone-like network connection between datacentres, but adding this doesn’t change the map so much, so in the interests of clarity, I’ve left this out.

Keeping the lights on

Datacentres use prodigious amounts of electricity.

Sadly, right now most of this electricity comes from burning fossil fuels, because this is the norm for most electricity grids.

That it takes this long to start talking about power, and that electricity is so far down the value chain helps us understand why, at present, the internet mostly runs on coal.

We’ll come back to this, but now that we have a value chain, let’s make a map from it, with our handy evolution axis.

For this piece, I’ve tried using Julius Gamanyi’s set of icons for draw.io, as an experiment. You can see my original file here on Google Drive.

Turning the value chain into a map

fig 2. creating a wardley map from our value chain

The next diagram shows how this looks once we’ve had a go at mapping our value chain with an evolution axis, for a company that runs its own servers, in its own datacentre.

Later, I’ll sub in concrete examples of specific products or pieces of software that correspond to each node, to make it easier to relate to, but for now, let’s focus on the evolution axis.

As you’d expect, a lot of what we have is either in the product realm (you’ll pay for a product, or hire someone to install and customise it), or the utility realm (i.e. you pay on a metered basis, for something fairly undifferentiated).

For the bits that our end users see, we’ll often have some custom code or content that’s unique to us, so let’s put it on the left.

I’ve also added an example below, with some actual products or services you might see in use, to make things a bit more concrete.

fig3. our map with products/software you might recognise if you build websites

Using this map

Now we have a map, we can try using it to help us ask ourselves some questions, like, I don’t know…

What if we listened to science, and we acted like we cared about all the avoidable harm we cause from burning fossil fuels by default in the tech industry?

For the purpose of this piece, let’s make the audacious assumption that, given the choice, we would take steps to avoid tens of thousands of early deaths from particulate inhalation from burning fossil fuels, or avoid contributing to runaway climate change, and all the conflict and harm associated with it.

There are typically two strategies you will see people using and talking about when the subject of technology and climate change comes up.

They’re not mutually exclusive, but they look a bit different on the map.

If we use colour to represent the kind of energy flowing through the map, and the thickness to represent how much we’re using, we can represent both strategies.

Strategy 1: Reduce how much you emit, by matching useful work to underlying usage, and using more industrialised infrastructure

Before we map this, let’s refer to the International Energy Agency’s own report, Digitalisation and Energy, from 2017, to see what’s been happening over the last few years, so we know what we’re trying to represent.

The report outlines:

  1. how a shift to cloud has helped take the edge off the growth in energy use in datacentres, but also
  2. gives some very rough indications of energy used by different components we might show on the map.

We can see this in the charts below, from the report.

fig 4. charts from the IEA report showing changes in energy use

On the left set of charts, we see what kind of kit is using energy in datacentres – servers (i.e. compute), in light green, make up the lion’s share of energy, followed by what they call infrastructure (in this case, HVAC and chillers), in dark blue, for keeping servers cool.

The turquoise and darker green represent storage, and the network respectively, which are both much smaller.

On the right chart we see how the market has changed over the last few years – you can see how energy demand has grown slightly, but not by all that much in absolute terms. You can also see how much of the pink area, representing less efficient traditional datacentres, has been replaced by the shift to cloud and hyperscale (i.e. Amazon, Google Cloud, etc.) datacentres instead.

What might this look like in our maps?

I’ll show the map pre-cloud, with a very rough estimate of how much energy is flowing through it, then I’ll show a map with a bunch of techniques applied to use more industrialised services, and how they change the map.

Remember – colour indicates the kind of energy we’re using, and thickness indicates roughly how much we’re using.

Showing this shift in the map – before

fig 5. pre-cloud, running on brown power

Because so few people think about where the energy running our servers comes from, our power is overwhelmingly brown power from fossil fuels. Because in-house datacentres are less efficient than cloud datacentres, they end up needing more energy to do the same amount of work, which means more emissions.

We know compute is more energy-hungry than storage and network, so its lines are a little thicker accordingly.

Showing this shift in the map – after

Now, let’s look at the map with more appropriate techniques applied.

I’ve grouped them at three levels – Platform, Packets, and Process – which is the mental model I find most helpful and use in talks about the subject, and which maps fairly well to the official GHG corporate standard for reporting emissions for digital services and ICT.

fig 6. our map, post-cloud, still with ‘brown’ power

At the platform level – moving to cloud

This is pretty much the whole premise of cloud – by treating compute like a commodity, it’s used in volumes that make economies of scale possible, and you end up with vastly more efficient datacentres. Also, the computing you do pay for, you can scale up and down more easily to meet demand.

You can see this on the map with the compute, storage and datacentres much further to the right. Industrialising this ‘pulls’ the other components to the right too, to the point where you’re more likely to pay someone else to run continuous integration for you, and rely on a larger provider like GitHub for hosting code.

You’d also rely on some kind of object storage, like Cloud Storage from Google Cloud Platform, or Amazon’s S3 service, where you pay pennies per gigabyte.

For the same amount of work, all these services are more efficient as they benefit from economies of scale, so we use less energy – hence the skinnier lines.

At the network level – sending less over the wire

Because you pay for bandwidth, there’s a direct incentive to send less data over the wire, but it also has a (relatively small) environmental impact – you save money by being parsimonious, because you need less infrastructure running to support shifting all these bits around, which results in lower emissions.

In addition though, when it comes to the web, time is money, so you also end up with a snappier user experience, so people can do what they came to do faster, and in greater quantities. I’ll leave out discussion of Jevons paradox here for now, where greater efficiency drives more absolute use, because that pretty much applies to all of capitalism, and this post is already long.

Again, I’ve represented the impact of reduced emissions by making the lines thinner where the network is involved – the content delivery network, network, assets, and markup nodes – to represent less energy used.
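To give a feel for why fewer bytes means fewer emissions, here is a back-of-the-envelope sketch. Every figure in it is an assumption chosen for illustration – published kWh-per-gigabyte estimates for networks vary widely, and so do grid intensities.

```python
# Back-of-the-envelope sketch: every figure below is an assumption chosen
# for illustration – published kWh-per-GB estimates vary widely.
PAGE_WEIGHT_MB = 2.0           # assumed average page weight
MONTHLY_PAGE_VIEWS = 500_000   # assumed traffic
KWH_PER_GB = 0.5               # contested, illustrative network energy factor
GRID_GCO2_PER_KWH = 475        # assumed grid carbon intensity

def monthly_transfer_emissions_kg(page_weight_mb, page_views):
    """Very rough estimate of monthly CO2 (in kg) from data transfer alone."""
    gigabytes = page_weight_mb * page_views / 1024
    kilowatt_hours = gigabytes * KWH_PER_GB
    return kilowatt_hours * GRID_GCO2_PER_KWH / 1000

if __name__ == "__main__":
    before = monthly_transfer_emissions_kg(PAGE_WEIGHT_MB, MONTHLY_PAGE_VIEWS)
    after = monthly_transfer_emissions_kg(PAGE_WEIGHT_MB * 0.5, MONTHLY_PAGE_VIEWS)
    print(f"before trimming assets: ~{before:.0f} kg CO2 per month")
    print(f"after halving page weight: ~{after:.0f} kg CO2 per month")
```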

At the process level – using appropriate techniques

You’ll often see people in lean or agile circles talk about process when it comes to product development in terms of avoiding waste, where waste is everything that doesn’t directly generate value for end users or customers.

You can see this in the shape of the updated cloudy map itself (fig 6), where there’s less undifferentiated heavy lifting taking place. Also, where services are paid for as a product or service, there’s often a clearer link between the amount you pay to provision versus your actual usage, giving a direct incentive to think about use.

Buying the knowledge about how to change our map

There’s one thing we should bear in mind here – the knowledge of how to take a product or service and make its map look less like the earlier one, with all the thick lines, and more like the one with more components on the right and thin lines, is not evenly distributed in the industry.

In fact, it often relies on specialised knowledge and new forms of practice – and because acquiring this knowledge is expensive, the organisations or people who have this tend to be expensive too.

fig 7. showing the new forms of expertise and knowledge you’d hire, to change the map

Because of this, your product needs to be successful enough, or large enough to have sufficient usage to justify hiring these expensive people, in the hope that they’ll create more value than they cost, so you make a return on doing so.

In all these maps, where savings are taking place, they’re largely doing so because there’s a happy alignment where reducing wasteful usage translates to savings being passed along by providers.

This is intrinsically appealing if you work in tech, as now, armed with this knowledge, you might feel like Captain Planet for just doing your job.

I’d request you keep reading for the second strategy though.

Strategy 2: Reduce your emissions by changing where your electricity comes from

The other option is to look upstream, make a change there, and see the effects cascade through the entire map.

Let’s say we wanted to only run our own infrastructure on 100% renewable power.

Our map might look like the one below. I’ll explain the colouring shortly.

fig 8. switching to green power

Wait. Why isn’t it all green?

The short answer is because the internet.

We can green the parts of the value chain we control directly, but because packets of data travel through the rest of the internet, it gets much harder to have any meaningful influence beyond this.

Think of all the datacentres data has to pass through to reach your device.

Every datacentre it passes through, in every different country, uses electricity that might come from a different energy mix, at different times of day.

So, here, we don’t have direct control over how data travels through the internet – the best we can hope for in the short term is to control how much we use.

How do I know what is green?

Okay, let’s look at just the bits we can control. The problem now is that if we want to only use green power, we either have to run everything ourselves, where we directly control the infrastructure we use, or only choose providers who also exclusively run their infrastructure on green power.

Right now, this is an expensive, manual process, and either involves loads of specialist knowledge about the ways you might buy green power, or asking a lot of the same questions of everyone you buy services from, and hoping they know and/or care.

Because this is far from most customers, and pretty tedious to collect, it’s in a really sad, sad part of the map – right at the bottom, and all the way to the left.

Sad times.

fig 9. why we default to fossil fuels – most of us don’t know to ask, or how to ask

What if we could make this easier to find out?

What if it was easier to find out how a datacentre was powered?

For the last 10 years, the Green Web Foundation has been collecting data on which sites run on which power, by combining properly nerdy analysis of how networks are built, using commonly available sysadmin tools, and collating evidence about how the datacentres are powered by straight up asking people who run them.

It’s also been tracking when sites have changed, to make it possible to track the shift of the internet away from fossil fuels.
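To give a flavour of what those network-level checks involve, here is a minimal sketch: resolve a domain to an IP address, then see whether it falls inside a network known to belong to a green provider. The hard-coded network below is only a placeholder – in the real platform, that matching data lives in the Green Web Foundation database.

```python
import socket
from ipaddress import ip_address, ip_network

# Placeholder data: in the real platform, green IP ranges come from the
# Green Web Foundation database, not a hard-coded list like this one.
GREEN_NETWORKS = [ip_network("192.0.2.0/24")]  # TEST-NET range, for illustration

def resolve(domain):
    """Resolve a domain name to one IPv4 address."""
    return ip_address(socket.gethostbyname(domain))

def looks_green(domain):
    """Check whether a domain's IP falls inside a known green network."""
    ip = resolve(domain)
    return any(ip in network for network in GREEN_NETWORKS)

if __name__ == "__main__":
    print(looks_green("example.com"))
```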

So… what if checking this information was as simple as an API call, against a domain?

Let’s try mapping it.

fig 10. What we’re trying to do at the Green Web Foundation

This is what we are working towards now. We’re opening up as much as we can, to get this knowledge out of that sad part of the map, by making it as easy to consume, in as many ways, as we can.

Right now

This information is accessible in two main ways – deliberately checking a domain against an API, or using a browser extension to show this to you as you use the web.
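If you want to try the API route, a minimal sketch looks something like this. The endpoint path and response fields are assumptions based on the public greencheck API at the time of writing, so check the current documentation before relying on them.

```python
import json
import urllib.request

def check_domain(domain):
    """Ask the Green Web Foundation greencheck API whether a domain is green.

    The endpoint path and response fields here are assumptions based on the
    public API at the time of writing – check the current documentation.
    """
    url = f"https://api.thegreenwebfoundation.org/greencheck/{domain}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    result = check_domain("www.thegreenwebfoundation.org")
    print("green" if result.get("green") else "grey", result.get("hostedby"))
```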

Over the coming months, we’ll be making it easier to access this data, by creating a regular release of open datasets in this field, but also making it easier to add to this via the API, so the data releases are more useful. We want to make it easier to build upon the API, or use the data to come up with new things we haven’t thought of.

Competing on green in the world of tech

In the end, this is what we’re aiming for – we want to make it much easier to compete on green credentials, because, all things being equal, we think most people, given the choice between comparable providers, would prefer to choose the one that doesn’t have all the human costs of burning fossil fuels associated with using their service.

So, when and how might this info be useful? We’re not sure, but we have a few ideas.

When choosing your suppliers

If you’re looking to choose a provider for part of the value chain on this map, it should be easy to find this information, and take it into account when deciding who’s in your value chain. Especially if, like lots of large companies, you’ve already publicly said you are doing this.

When you’re finding people to work with

If you’re looking for a new job, when you have the most leverage possible, you should be able to check if an employer is walking the walk, and talk to them about this.

It’s increasingly common to talk about diversity in conversations about finding talent now, and increasingly smart managers hire for diverse teams.

We know who climate change hits the hardest, and we know it hits the people in the communities we want to hire from if we want a more diverse industry.

When helping others rethink how they deliver services

And if you want to use this data in products or consulting to make it easier to help people move away from fossil fuels, and compete on this, you should be able to.

A huge amount of money is spent moving to the cloud, and there’s obvious leverage in moving existing workloads to greener clouds wholesale.

But you don’t need to wait until then.

Just like you can add a test each time you see a bug in a codebase, to clean it up over time, you don’t need to wait until you’re about to shift to a new datacentre to start cleaning up your supply chain. What if there was a ‘green ratchet’ policy, where every new service going into production had to run on green power?
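As a sketch of what that ratchet could look like in a deployment pipeline, here is a small script that reuses the greencheck call from the previous section (endpoint and response fields assumed, as before) and fails the build if any newly added service domain is not verified as green. The domain list is, of course, hypothetical.

```python
import json
import sys
import urllib.request

# Hypothetical list of domains for services a new release depends on.
NEW_SERVICE_DOMAINS = ["api.example.com", "cdn.example.net"]

def is_green(domain):
    """Query the greencheck API (endpoint and response fields assumed)."""
    url = f"https://api.thegreenwebfoundation.org/greencheck/{domain}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8")).get("green", False)

def main():
    grey = [domain for domain in NEW_SERVICE_DOMAINS if not is_green(domain)]
    if grey:
        print("green ratchet failed – not verified green:", ", ".join(grey))
        sys.exit(1)  # non-zero exit fails the pipeline
    print("all new services verified green")

if __name__ == "__main__":
    main()
```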

Help make the web green

One of the reasons we’re taking an open approach with the Green Web Foundation is that when you’re a tiny NGO, you need all the help you can get, and “open” is a strategy that can help increase your reach when you don’t have much in the way of resources at hand.

So, if you’re interested in working with us, from auditing your own supply chain, to building tools or services to help transition the web to run entirely on green power, please get in touch – send an email to contact@thegreenwebfoundation.org, or if you prefer twitter, send a DM to @mrchrisadams.

If you’re just curious, you can always join the low-traffic mailing list for updates as we work on this.


Opening up the Green Web Foundation

4th March 2019

Over the last ten years, the Green Web Foundation has built the world’s largest database tracking the shift of the internet to green power. Over these years, we’ve been recording which websites run on which kinds of power, and when they switched from fossil fuels.

Now, over the next 6 months, and funded by the Prototype Fund in Germany, fellow environmental web geek Chris Adams is joining the team to work with us on a new project, the Open Green Web. We’ll be releasing a set of open datasets, and open sourcing all the code we use to collect this data. Find out more in his introductory post below.

How we power the internet matters.

If the IT industry were a country, it would emit more carbon than Canada.

What’s more, a huge part of these emissions comes from burning fossil fuels to generate electricity. So, one of the most effective things we can do to reduce emissions in our industry, is transition to an internet that runs entirely on green power.

However, one of the key things slowing down this transition is knowledge – most of us have no idea what kind of power the services we use run on, and don’t know how to ask either.

Opening up the green web

I’ve been using the Green Web API for various bits of analysis when trying to understand the carbon footprint of the entire internet, and I’ve been inspired by how projects like OpenCorporates have had an impact elsewhere.

So last year I got in touch with René Post (The Green Web Foundation co-founder), and together we hatched a plan to see if opening up the data and code might help, in speeding the shift to a green internet.

I live in Germany, and if you’re a developer in Germany, there’s a fantastic government project called The Prototype Fund, that will fund people to work on Public Interest Tech.

I applied, and the project to open up the data and code in the Green Web Foundation was chosen – huzzah! Below is the moment I found out, on Twitter, that it had been accepted:

The plan over the next 6 months

After the initial dancing, and introductions, we have a rough plan now.

Over the next 6 months, we’ll be opening up as much of the Green Web Foundation code and data as we can. We’re focusing on three main areas: trust, ecosystem, and reach, to do all we can to speed the transition.

Let’s cover these in more detail:

Trust

We want to make it easier to see how we power the internet now, and understand the provenance of the data we publish.

To do this we will:

  • publish a detailed methodology about how we check if a site is using green power or not
  • open source the code for the Green Web Foundation platform, and the browser plug-ins we use
  • update the documentation to allow anyone to understand how it all works, and how to contribute to the project

Ecosystem

Next we want to make it easy to use the data, by extending the APIs we offer.

Right now, the API is read-only. To update the Green Web Foundation directory, you need to sign into the website to add information about a given site.

We want to make it possible to update the Green Web Directory through an API as well as read from it. We want to do this so it can be incorporated into new and existing tools.

In addition, we’ll expose more information in the API responses, to make it possible to do new things with the API.

What if search engines could use this data when presenting results to you, to rank green results higher? Or if you could use this to route packets of data around the internet based on how green each ‘hop’ would be? What if, when building new sites or apps, you could audit the supply chain of services you rely on, to see which ones are still using fossil fuels, so you know which to switch?

We’ve been speaking to a bunch of organisations about new things we could do, and we’re looking for more ideas, so if you have an idea, please get in touch below.

Reach

Finally, we will publish open datasets from the data collected, so it can be used for analysis by academia, industry, and anyone else interested.

One goal in particular that we have is to create a dataset that can easily be incorporated into existing resources like the HTTP Archive – so when we talk about the state of the web, we can also talk about how much of it runs on green power, and how much still runs on fossil fuels.

How you can help

So, that’s the dream. It’ll take more resources and time than we have right now, so we’ll need your help.

We’ll be looking for beta testers to try out the new APIs and user interfaces, for partners to work with the data we already have, and we’re interested in hearing how you’d want to use the data.

We’re also looking for an advisory board to help us see the problems we’re trying to solve from as many angles as possible, as without this, we know we’ll miss loads of really obvious stuff.

And of course we’re still looking for project partners, to help with the upkeep of running the servers and services we make available.

If you’re interested, please join the project newsletter, at the link below, and we’ll send our first update shortly, about how you can get involved.

Join the Open Green Web project mailing list for updates.

If you prefer twitter, you can follow me, as I work on it (I’m @mrchrisadams) or the organisation account, @greenwebfound, where I’ll be posting updates over the coming months.

Tally ho!

Post image credit: Image by Pexels on Pixabay