How I learned to stop worrying and love identity assurance

The past week has seen a surge in media coverage of the government's new Identity Assurance (IDA) programme, as the Department for Work & Pensions prepares to announce the first group of Identity Providers (IDPs) to be awarded services under their procurement framework. Those who know me will be aware that I played a minor role in trying to persuade the last government to change its plans for ID Cards, and that I became known as an opponent of that scheme; but for the past two years I've been engaged by the Post Office to support the shaping activities around the development of the Identity Assurance programme.

So what persuaded me that IDA is a good idea?

The National Identity Scheme was possibly one of the most ill-conceived and illiberal public sector programmes that the UK has ever seen. The government legislated an architecture that would create tens of thousands of endpoints, used by hundreds of thousands of users, all linked into a central database that would provide a 'deep truth' on every person in the UK. Every interaction with the State would disappear into that melting pot, which would become a panopticon of our lives.

ID Card supporters promised that the scheme would defeat terrorism, stop illegal immigration, put an end to serious and organised crime and make our lives easier, but each of these objectives fell by the wayside as the project developed. They promised it would be hosted in a secure database but then had to fall back to distributing the data across three silos, none of which were designed for the purpose. They promised it would be secure, whilst simultaneously having to dismiss public servants by the dozen for misuse of existing data sources. They promised it would be accurate, yet needed to legislate compulsion that we would update our own records. They promised that carrying an ID Card would not be compulsory, whilst mandating registration and usage of that card.

Like many others, I was radicalised by the National Identity Scheme. It provoked me into speaking out against the government, something I had never considered before. As I worked with the likes of the London School of Economics, the Information Commissioner's Office, and (oddly) the Identity & Passport Service, I believed I'd channelled my inner privacy advocate. But over time I came to realise that in fact my objections stemmed not from a civil liberties motive, but from my position as a taxpayer: I was angry that the government was willing to pay something between £6bn and £17bn (depending upon who you believed) for a system designed to serve the needs of civil servants seeking a 'deep truth' about every individual in the UK, driven by a 'gold standard of identity'. It was designed around their needs, not those of the public.

The scheme was lunacy. It had to be stopped. And then in 2010, with the new government, it was. ID Cards went out, and the National Identity Scheme was literally put in a shredder. The 'Intellectual Pygmies,' as a former home secretary nicknamed the privacy advocates, had won the battle, and danced their victory dance.

Pygmies (physical or intellectual) they are not...

But nature abhors a vacuum, and without a clear strategy for population-scale ID, what would fill that space? The Coalition promised it wouldn't be another National Identity Scheme. But politicians' promises can't, ahem, be treated as cast-iron guarantees. A vestigial tail of National Identity Cards still exists in the Foreign National Biometric Residence Permit, and some Opposition MPs still speak of their ambition to bring the scheme back from the dead. If those of us who care about privacy, and about how much tax we pay, wish to drive a stake through the heart of intrusive identity schemes, then we need to build something better to take its place. Something so good that nobody would throw it out. And that's where Identity Assurance comes in.

Surprisingly, the genesis of IDA came from the same government that brought us ID Cards, when in 2008 HM Treasury published Sir James Crosby's report on ID, which recommended a federated, rather than centralised, approach that flew in the face of the prevailing policies. Not surprisingly, the government hated it and did its level best to bury it, but it was the seed for the new IDA scheme.

The IDA approach builds upon tried and tested principles which are already being hammered out by the likes of the Open Identity Exchange, working with a collective of experts, potential providers and pressure groups from the UK and overseas. The IDA programme differs from its predecessors in many ways, not least that public bodies can't be Identity Providers (IDPs): IDPs will be exclusively private sector.

Users can have as few or as many credentials, with as few or as many IDPs, as they wish. They can change providers, use different credentials for different directed purposes, and hopefully we will have an environment where any of the cards in their wallet, or their phone, could be usable as a high-assurance credential to interact with government. If they choose not to use IDA then they won't have to - it will augment, rather than replace, existing means of engagement. That said, if IDA is successful then it would make sense for government to scale back other authentication mechanisms if the public choose IDA instead.

IDA gives us an authentication environment that is anonymous, pseudonymous, distributed, and not subject to centralised control. Government doesn't get to track our interactions, our movements, our dealings with our IDPs. The design is a truly user-centric approach which embodies the Government Digital Service (GDS) mantra of "What is the user need?" by treating the users as the end customer, rather than the civil servants. 

It's also a risk-driven strategy that ditches the traditional 'deep truth' about each citizen; instead, relying parties must determine transactional risk, and hence what level of identity assurance they need for any transaction. Simple services such as a request for information about local authority benefits might be achieved using lower levels of assurance from social login (the much-speculated 'Facebook' ID), whereas payout of those benefits might require the higher levels of assurance provided by a face-to-face verification of the user and their proofs of identity. That's a really big change for government, and I suspect that many public authorities will struggle to grasp the idea that they don't need gold-plated identities and attributes to support low-risk interactions.
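As a sketch of what that risk-based decision might look like in code - the service names, level numbers and thresholds below are entirely my invention for illustration, not taken from any actual IDA policy document:

```python
# Illustrative only: transaction names and assurance levels are invented
# for this sketch; they do not come from the IDA specification.
REQUIRED_LEVEL = {
    "view_benefit_info": 1,     # low risk: a social login might suffice
    "update_address": 2,        # medium risk: a verified online identity
    "claim_benefit_payout": 3,  # high risk: face-to-face verified identity
}


def access_permitted(transaction: str, credential_level: int) -> bool:
    """The relying party matches the credential's assurance level to the
    risk of the transaction, rather than demanding a gold-plated identity
    for every interaction."""
    return credential_level >= REQUIRED_LEVEL[transaction]
```

The point of the sketch is that the same user, with the same set of credentials, can be accepted for a low-risk enquiry while being asked to step up to a stronger credential only when the transaction genuinely warrants it.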

Under the IDA approach we, the users, are treated as the single source of truth about ourselves. We get to review and update our data. We store it where we want, with whom we choose, and can even delete it if we wish. We can become our own Data Controllers (and it is hoped that in the future the Data Protection Act might be amended to support just that scenario).

And GDS' adoption of the fresh approach to privacy is more than skin-deep: rather than putting their hands over their ears and saying 'la la la' whenever the word 'privacy' is mentioned (as some other government departments were accustomed to doing), GDS created the snappily-named Identity Assurance Programme Privacy and Consumer Advisory Group, which comprises a range of privacy advocates and technology experts who have developed the principles which will dictate the privacy approach for IDA. GDS are also working to ensure that the approach aligns with Kim Cameron's Laws of Identity. 

So where does the IDA journey take us? The logical endpoint is an environment in which minimal disclosure proof of attributes is the norm; that is, that we are able to prove something about ourselves without revealing any other information (Dave Birch uses the great analogy of 'Psychic ID'). Relying parties get to see nothing more than information that is essential to validate our entitlement for the service we request. If - and I know that's a BIG if - we can hold true to the system principles and deliver pervasive identity assurance, we could create an environment where it is normal to assert attributes without even identifying ourselves.

There's no promise this will work. Sure, the technology is tried and tested, but the commercial and policy challenges are huge, and there is still much to be done - hammering out the contracts, legislation changes and cross-government policies is a job that has only just begun. But in an environment where we lack any trusted population-scale online authentication mechanism, IDA is better than all the other options, and I'd rather we run the risk of failure because our ambitions are too lofty, than because they are too low. If IDA can deliver on its promises, then we might just create an environment where the prevailing identity mechanism protects - rather than degrades - our privacy.

And that's why I support IDA.

(This article is based upon a flash talk I gave at the RSA Conference Europe 2012).

(Declaration of Interest: I have been supporting the Post Office's work on IDA).

Proof of age comes of age

It's October, the time of year when another intake of students are released from school into the adult world of university, and fill the pubs and clubs of university towns. These establishments are legally bound to verify that their customers are old enough to enter, and risk losing their licence if they fail to do so. Under 'Challenge 25' guidelines (in Scotland these are legal requirements), licensees are expected to verify the age of any customer who appears to be 25 years old or younger. In practice, Home Office guidelines mean that to date the only 'acceptable' proofs of age for young people have been a passport, a driving licence, or a PASS card.

However, passports and driving licences are far from ideal; from the licensee's perspective, their staff have to confirm that the photo matches the bearer, and that the date of birth is old enough, and this often has to happen in a noisy, poorly-lit environment. The PASS card removes the need to confirm the date of birth, but has long been subject to criticism that it is vulnerable to forgery (although PASS assert that no fakes have been found), is not accepted everywhere, and certainly suffers from potential transferability between holders, particularly if licensees fail to check the details properly.

From the young person's perspective, passports and driving licences can be a real problem. Many young people don't drive or have a passport, but end up having to buy one just to be able to go out with their friends. Passports and driving licences are easily lost or damaged, resulting in a risk of identity-related fraud, potential safety concerns, and a nasty bill to obtain a replacement (a new passport costs £72.50). Young women can be particularly vulnerable if they have to carry and offer proof of ID that includes their name and home address, and in discussion with the NUS in Northern Ireland a few years ago I heard anecdotal stories of students being attacked after unwittingly offering up proof of ID that identified them as living in a predominantly Protestant/Catholic area.

This is a societal problem whose current solutions fail to properly address the needs of any of the stakeholders. Indeed, ACPO advises against carrying valuable ID such as passports for alcohol-related purchases. Yet the Home Office now acknowledges that the problem of fake ID is in fact dwarfed by genuine ID being passed down, or sold on when expired, which ends up as a valuable commodity doing the rounds amongst the underage. 

It's therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).

The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks the document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer's date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer's details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker - the service is completely anonymous.

At the licensed establishment, the staff member has a handheld reader which comprises a fingerprint scanner, NFC reader and red/green indicator lights. The customer presents their sticker and places their finger on the reader; the scanner generates a hash from the fingerprint, uses it to unlock the date of birth, and then provides a red/green light to the operator to indicate success or failure. Again, no record of the transaction is retained (beyond statistical data so that the licensee can prove how many checks were done and at what times), and in the event of a failure the operator is not told the reason. Privacy is preserved at all stages of the process.
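The core of the idea can be sketched in a few lines of code. To be clear about what I've simplified: real fingerprint scans never reproduce a template bit-for-bit, so a production system needs fuzzy matching or a fuzzy extractor rather than the plain hash below, and a proper cipher rather than the toy XOR I've used - this is a conceptual sketch of the privacy property, not touch2id's actual implementation:

```python
import hashlib
from datetime import date


def derive_key(fingerprint_template: bytes) -> bytes:
    # Only a hash derived from the finger is ever used; the template itself
    # is discarded, so no fingerprint database exists anywhere in the system.
    return hashlib.sha256(fingerprint_template).digest()


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for a real one (e.g. AES).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def make_sticker(dob_iso: str, fingerprint_template: bytes) -> bytes:
    # Enrolment: the sticker holds only the encrypted date of birth.
    return xor_cipher(dob_iso.encode(), derive_key(fingerprint_template))


def check_age(sticker: bytes, fingerprint_template: bytes,
              today: date, min_age: int = 18) -> bool:
    # Verification: the right finger unlocks the date of birth; the wrong
    # finger yields garbage, so the reader shows a red light - and gives
    # the operator no reason for the failure.
    try:
        dob = date.fromisoformat(
            xor_cipher(sticker, derive_key(fingerprint_template)).decode())
    except (UnicodeDecodeError, ValueError):
        return False
    cutoff = date(today.year - min_age, today.month, today.day)
    return dob <= cutoff
```

Notice that the sticker carries no name, address or photograph, and the verifier learns nothing except pass/fail: the date of birth only ever exists in the clear, momentarily, inside the reader.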

Touch2id is working with the licensing authorities in Bath and Trowbridge to roll out the service (the launch needs to be regional to ensure that a critical mass of stickers and readers exists in a given area). I was invited down to the Bath University Freshers' Fair to help promote the service to the new intake, and to get a first-hand feel for their reactions to biometric ID. Clearly the sample was self-selecting - we only got to speak with students who were interested in the service - but the response was overwhelmingly positive. In approximate order of popularity, their reactions were:

  • "Oh wow, a free USB memory card, how big is it?" (these are students we're talking about, so always quick to spot a freebie);
  • "Does that mean I don't need to carry my passport around?" (correct!);
  • "Can I use it in all the pubs in town?" (not all of them yet, but nearly all);
  • "Where can I get it?" (participating local Post Offices);
  • "Can I use it in the campus library?" (no, but not through any limitation of the technology);
  • "Do you get to store my fingerprints?" (no);
  • "What happens if I lose it?" (if you get a spare sticker then you'll have a backup, otherwise you have to re-enrol from scratch);
  • "What happens if someone steals my sticker?" (some of these students were very hung over so it took a few seconds for the coin to drop that the biometric credential is non-transferrable).

What we didn't hear - and this surprised me - was any adverse reaction. A few students were initially sceptical, but after asking a few of the above questions, they were quickly won over. I had anticipated vocal objections to the very concept of a fingerprint proof of attributes scheme, but that simply didn't happen. This might be because some have become accustomed to fingerprints in schools, but I suspect it is much more likely that they see clear value in the proposition, without any risk for themselves - unlike other ID schemes, touch2id is built around user needs rather than an underlying desire to amass data.

We were fortunate enough to have coverage from a regional ITV News team, who also interviewed Don Foster MP - he's been very supportive of the programme. You can see their footage here.

What next for touch2id? The team is hoping to expand the West of England coverage, and kick off another region elsewhere in the UK. I'm hoping we'll see the same minimal-disclosure proof of attributes ideas brought into play in other ID arenas - including the government's new Identity Assurance programme - but more on that shortly... 

Declaration of interest: One of my duties for Post Office Ltd is to provide support for the roll out of touch2id.

HM Government Loses its Identity

The government has done something very clever, and people seem not to have noticed. With very little fanfare, it was announced last week that all government departments will share a common logo, that of the Crown, with minimal rights to vary colours and fonts. No more huge rebranding exercises, no more bizarre departmental logos, perhaps even an end to the merry-go-round of renaming exercises that the last administration so enjoyed (I imagine that the DTI BERR BIS will be very pleased to hear that).

This change was apparently driven by Martha Lane-Fox's report, and it achieves much more than just saving money on branding consultants (although that's a worthy aim in itself); it creates an environment in which some of the alleged inter-departmental warfare famously lampooned in numerous political satires is potentially defused, since those departments are less characterised by their branding; it creates a common bond through a shared identity; and most importantly, it is an important step towards proper consumer-centricity in service delivery. After all, do individuals care from which public authority a particular service originates? No. Do they wish to deal with multiple departments to obtain those services? No. Do they have any choice in which authority provides those services? No. So why bother wasting money on promoting the brands of particular departments?

The move aligns nicely with GDS' plans to deliver a single website for government. What would be welcome now would be a similar edict applied to regional authorities, so that we no longer waste money on branding individual NHS or police authorities, or local government bodies. 

CCDP: It's not what you know, it's who you know

The dust has temporarily settled a little on the Home Office's announcement of the Communications Capabilities Development Programme (CCDP), and doubtless some Ministers are now licking their wounds whilst others sharpen weapons in preparation for the fight that lies ahead when the legislation appears before Parliament. That the Coalition could countenance such an illiberal and disproportionate dismantling of privacy rights came as a shock; that they almost immediately fell into the same traps as the last government whilst they tried to justify their arguments was risible.

So what's all the fuss about? CCDP is the logical successor to the last government's abandoned Interception Modernisation Programme, which was intended to create a central database of all telephone and Internet communications traffic. In its new guise, the plan will force Communications Service Providers (CSPs) to maintain their own databases of communications metadata: storing details of all communications over their networks, but not the actual content of the communications. Government bodies will have access to communications metadata under statutory powers, but will not be able to access the actual contents of the communications without first obtaining a warrant to do so. The excellent ORG wiki has a wealth of information about CCDP.

The Coalition has been at pains to play down the significance of the strategy, which is championed by the Home Office, and has been lurking around for some months now, but was thrust into the spotlight by articles in the Telegraph and the Sunday Times. Prime Minister David Cameron assured Parliament that "we have made good progress on rolling back state intrusion in terms of getting rid of ID cards and in terms of the right to enter a person's home. We are not considering a central Government database to store all communications information, and we shall be working with the Information Commissioner's Office on anything we do in that area." The Prime Minister's reassurance should make everything OK. After all, the government's only asking for communications metadata, and doesn't want to store it centrally; and the ICO will ensure that things happen by the book. Isn't that a reasonable and proportionate requirement in the Internet age? Absolutely not.

Let's debunk the facile arguments about centralisation and oversight. The government does not want, nor need, a central communications database in order to monitor our lives, and in fact that would make the job harder: rather than wanting one giant haystack in which to find a particular needle, the Home Office plans to create many smaller, more manageable haystacks, the costs of which can be forced upon the CSPs, together with associated delivery challenges, so that there's a much smaller risk of the implementation failing. In the federated world, there's no point in having a centralised database when multiple sources can be accessed as easily as (or even more easily than) one central one. As for oversight, that's a hollow reassurance given the ICO's impotence at dealing with the most basic threats to privacy and liberties caused by central government departments and major corporations. The Commissioner doesn't have a fraction of the resources required to apply even a veneer of control over public servants' use of CCDP data, and any claim of governance from his office is clearly meaningless.

But the most worrying aspect of CCDP is the mandatory interception of communications metadata. That metadata can provide a richer and deeper insight into an individual's life than any amount of communications content. Simply by analysing the times, sources and destinations, geographic locations, devices and contexts of an individual's communications - as well as taking into account things that they don't do - a wealth of information can be obtained. At a glance, a public servant who has not had to obtain a warrant or apply to a court will be able to find out where you live and work, with whom you correspond, what your financial, health, sexual, religious, political or professional interests might be, your day to day movements, and from these, your likely intentions.
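To make that concrete, here's a toy analysis of call records that contain nothing but an hour and a cell location - the records and place names are entirely invented, but the technique is the standard one:

```python
from collections import Counter

# Invented metadata: (hour_of_day, cell_location) per call. No content at all.
records = [
    (23, "Acacia Avenue"), (22, "Acacia Avenue"), (2, "Acacia Avenue"),
    (10, "City Centre"), (11, "City Centre"), (15, "City Centre"),
    (20, "St Mary's Hospital"),
]


def usual_location(records, hours):
    # The most frequent location during a given set of hours.
    tally = Counter(place for hour, place in records if hour in hours)
    return tally.most_common(1)[0][0] if tally else None


home = usual_location(records, hours={21, 22, 23, 0, 1, 2, 3})  # overnight
work = usual_location(records, hours=set(range(9, 18)))         # office hours
# home == "Acacia Avenue", work == "City Centre"; the single evening call
# near a hospital already hints at a health concern - all without a warrant.
```

Seven records, no content, and an analyst already has a home address, a workplace and a plausible medical inference. Scale that to every call, email and web connection made by everyone, and the claim that "it's only metadata" looks rather different.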

Consider Google's interest in your online activity: the search giant is actively trying to drop personal data about users because it doesn't need it; what Google is after is not to know who you are, but what you are about to do. If it can accurately predict that, then it can intercept your plans and try to modify them with paid-for advertising. That's how Google makes money. Social networks are very similar. LinkedIn, for example, will allow you to post and browse to your heart's content for free, because it's exploiting that behaviour on behalf of paying advertisers. If you want to see who's looking at your profile then you have to pay hard cash to do so. So the real value in online activity is not in the content, but in the communications metadata, and that's what the Home Office is now seeking: they don't want to mine what you know, they want to mine who you know. Without recourse to the courts, or meaningful oversight mechanisms. Without any form of opt-out mechanism or user transparency.

Fortunately, storm clouds are gathering over CCDP. Sir Tim Berners-Lee has spoken out about the scheme, saying that "The idea that we should routinely record information about people is obviously very dangerous…"  Civil liberties groups will be meeting at the London School of Economics on Thursday 19th April to discuss how best to fight the plans, in a revival of the 'Scrambling for Safety' events which were last held in the fight against the National Identity Scheme. The likes of NO2ID and 38 Degrees are pushing politicians to drop the draft legislation before it even reaches Parliament. But if this idea is to be stopped in its tracks, it will require the sort of popular protest that killed ID Cards and must now be brought to bear on CCDP. As the greatest living Englishman says in his Guardian interview:

‎"The amount of control you have over somebody if you can monitor internet activity is amazing… You get to know every detail, you get to know, in a way, more intimate details about their life than any person that they talk to because often people will confide in the internet as they find their way through medical websites … or as an adolescent finds their way through a website about homosexuality, wondering what they are and whether they should talk to people about it."

Rolling Out the Surveillance State

When the Coalition came to power, there was a clear manifesto promise to "roll back the surveillance state," including abandoning the much-hated National Identity Scheme and Contactpoint database, and applying much tighter controls to interception of private communications.

This week's announcement of the new Communications Capabilities Development Programme by the Home Office appears to fly in the face of that commitment. Home Secretary Theresa May has already started shuffling through the same weak excuses that the last government used to justify ID cards - we've seen 'prevention of terrorism' and within a day have got to 'protection of children.' At this rate we should reach 'control of immigration' by teatime tomorrow.

The inevitable and justifiable outrage in conventional and social media has already covered pretty much every angle, but I thought it appropriate to dig up a piece I wrote here shortly before the last government left office, when we had heard the old canard "if you have nothing to hide, you have nothing to fear" trotted out by a range of Ministers and government spokespeople. The Home Secretary has just resorted to that last bastion of the desperate illiberal trying to justify an unnecessary attack on civil liberties, and it's time to remember:

"'Nothing to hide, nothing to fear' is a myth, a fallacy, a trojan horse wheeled out by those who can't justify their surveillance schemes, databases and privacy invasions. It is an argument that insults intelligent individuals and disregards the reality of building and operating an IT system, a business or even a government."

The Great Liability Sinkhole

Building identity management systems is a doddle, it really is. All you've got to do is to knock up a web interface with a database behind it, offer a store for trusted attribute data, tie the lot to a federation standard like OpenID, market to the target user base and wait for the money to come flowing in. Simples.

Oh hang on, that's wrong, I think I may have dreamed that last bit - building identity management systems is very difficult indeed. The problem is there are still a lot of dreamers out there, and in consequence we see some good, some bad and some downright ugly identity management systems out on the interwebs. I was reminded of this as I examined a service recently - let's call them Yaoids (Yet Another Online ID Service). Like many similar offerings, Yaoids claims to be able to protect every aspect of my modern lifestyle by helping me to prove who I am online (we'll overlook the fact that I rarely feel the need to prove who I am - I already know who I am; what I want to know is who the hell I'm talking to online, and to have assurance that they're not going to talk to anyone else purporting to be me. Top tip for the sales people there).

Like any similar identity service, Yaoids has to overcome a number of challenges, including registering users in a trusted way so that the market can be confident that Yaoids' users are who they claim to be; and maintaining that trust level so that when things go wrong (which sooner or later they always do) then the users don't ditch the service.

These challenges are potentially huge for any provider, and in the majority of cases prove insurmountable. PayPal is an example of a company that has tackled them very well indeed: whilst it has a number of registration mechanisms, the majority of users need to already be in possession of a credit card to obtain service, and PayPal runs a couple of small transactions, with refunds, to confirm the details - and hence that an issuer's KYC check must already have taken place. PayPal has made it easy for service providers to integrate the platform, particularly through its X sandbox environment. Genius.
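The micro-transaction trick can be sketched as follows. Everything here is my own illustration rather than PayPal's actual implementation: `charge()` is a stand-in for a real payment gateway call, and the amounts and flow are invented for the example.

```python
import secrets


def charge(ledger, card, pence):
    # Stand-in for a payment gateway call; in reality this goes out to the
    # card network and only succeeds for a genuine, issuer-KYC'd card.
    ledger.append((card, pence))


def start_verification(ledger, card):
    # Two random sub-£1 charges land on the statement and are refunded.
    amounts = [secrets.randbelow(99) + 1 for _ in range(2)]
    for a in amounts:
        charge(ledger, card, a)
        charge(ledger, card, -a)
    return amounts  # the service stores these; the user reads them off the statement


def confirm(expected, claimed):
    # Only someone who can read the card statement knows the amounts, so a
    # match demonstrates control of the card - without the user ever handing
    # over a PIN or online banking credentials.
    return sorted(expected) == sorted(claimed)
```

The elegant part is what the scheme does *not* require: no credentials change hands, the customer is out of pocket by precisely nothing, and the liability position stays exactly where consumer credit legislation put it.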

Yaoids, on the other hand, has attempted to achieve the same outcomes through slightly different means. The service also rides on the back of another company's registration efforts (which isn't a bad thing), but in this case it's an online bank account: the customer provides their e-banking details, and Yaoids uses a third-party service to log in on their behalf to check the account is real, and that therefore someone must have conducted a KYC check on the user.

Have you spotted the problem yet? If not, then I'd like to introduce you to a mate of mine in Lagos who'd appreciate your help in transferring some funds from the estate of a deceased dictator out of the country, because I think you two would hit it off just fine.

Because PayPal uses a credit card transaction to build and maintain trust, the customer is assured that if anything goes wrong they are protected by consumer credit legislation, which generally falls in favour of the customer. If their PayPal account is hacked or phished, then the liability for the loss is transferred onto the card issuer. Of course the credit card companies don't like that, but because PayPal has been so effective at encouraging adoption, they've got little choice but to play along.

But Yaoids has instead left the customer at the mercy of banking regulations, and that's a very different liability story. If you've signed up for Yaoids' service, and my mate in Lagos has somehow emptied your bank account (oops, given the game away there) by some or other unrelated means, then regardless of whether or not the Yaoids service was compromised, you're going to have a very difficult conversation with the bank:

"Hello, this is the Grabbit & Run Online Banking fraud department, how may I help you?"

"My online service has been used to transfer all the funds from my account to Toby's mate in Lagos, I'd like it back please."

"Oh we're so sorry to hear that. Have you shared your online credentials with anyone?"

"No, of course not. Oh, except with Yaoids, who passed it on to their registration subcontractor, but they're all lovely trustworthy people."

"That's as maybe sir, but Grabbit & Run's online banking policies make it clear that we will not repay funds to customers who have handed their online banking credentials to a third party. Sorry sir but we cannot pay you back. Have a nice day. You muppet." <click> <beeeeeeeeep>

And there you go. Out of pocket, out of luck, and left with little choice but to resort to being patronised by the Watchdog team as they interview you about how hard done by you are, blaming Yaoids because that must have been the source of the loss, whether or not Yaoids did anything wrong at all. Yaoids customers then turn and flee, revenues dry up and the service closes.

What Yaoids have created here is a sinkhole for transaction liability: they've sidestepped the very necessary and often expensive step of building a trusted customer relationship, and there is now a mountain of commercial liability being swallowed into a sinkhole, and sooner or later that toxic liability will come pouring out in an unexpected place, destroying customer confidence and taking Yaoids - and its customers - with it.

Not in my back yard

This sort of problem isn't confined to Yaoids: most KYC checks want a passport as the document of choice, and there's nothing in the front cover which says:

"Her Britannic Majesty's Secretary of State requests and requires in the Name of Her Majesty all those whom it may concern to allow the bearer to pass freely without let or hindrance and to afford the bearer such assistance and protection as may be necessary ... oh, and she'll see you right if this passport turns out to be dodgy."

It's only society's conventions and habits that render the passport a trusted document for proof of ID outside border control use cases. The doomed National Identity Scheme expected businesses to rely on ID Cards as their credential of choice, yet made it clear that no liability would be accepted for fraud or error, and that was a key factor in UK plc's total lack of interest in that scheme (with the exception of those major IT providers who stood to profit).

It's this registration and liability conundrum that the Cross-Government Identity Assurance Scheme is intended to address at the root of its proposition, and at the moment there's every indication that it might just work. By federating existing trust relationships under trust schemes, the identity assurance approach should allow users to reuse their existing credentials - such as online banking - without liability issues, because there is no inappropriate third party, such as an independent commercial identity provider, involved in the relationship. There is no requirement to reveal banking passwords because the bank becomes the identity provider.
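To make the idea concrete, here's a minimal sketch of how a bank acting as identity provider can vouch for a user without the relying party ever seeing the banking credentials. All the names, and the shared demo key, are invented for illustration and do not reflect the actual scheme design; a real federation would use asymmetric signatures (SAML assertions or OpenID Connect tokens) rather than a shared HMAC key.

```python
import hashlib
import hmac
import json
import time

# Illustrative key shared between the identity provider and the scheme hub.
# Real trust schemes use per-provider asymmetric key pairs, not a shared secret.
SCHEME_KEY = b"demo-key-shared-between-idp-and-scheme"


def bank_check_credentials(username: str, password: str) -> bool:
    # Stand-in for the bank's real authentication backend.
    # The password is checked here, inside the bank, and nowhere else.
    return (username, password) == ("alice", "correct horse battery staple")


def bank_issue_assertion(username: str, password: str) -> dict:
    """The bank authenticates the user against its own records, then vouches for them."""
    if not bank_check_credentials(username, password):
        raise PermissionError("authentication failed")
    claims = {"sub": username, "iss": "grabbit-and-run-bank", "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SCHEME_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def relying_party_verify(assertion: dict) -> bool:
    """The online service verifies the signed assertion; it never handles the password."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(SCHEME_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])
```

The point of the sketch is the boundary: `bank_check_credentials` runs only inside the bank, and the relying party sees nothing but a verifiable assertion, so no third party ever holds the banking password and the liability sinkhole never opens.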

But until that happens, take care that when you sign up for an online ID service, it's not trying to hide your liability in a sinkhole somewhere - otherwise the Lads from Lagos will be in touch sooner than you might expect.

Draft principles for the UK identity assurance programme

Jerry Fishenden, Chair of the Cabinet Office Identity Assurance Programme Privacy and Consumer Group, has blogged the draft principles for the new identity assurance scheme, with a view to obtaining public feedback on those principles. I'm involved with the Group, and would urge anyone with an interest in this area to comment on his blog so that we can obtain the broadest feedback in order to deliver this important piece of work.

The principles are summarised below; there's a lot of work going on behind the scenes to define the small print that supports these.

1. The User Control Principle
Identity assurance activities can only take place if I consent or approve them.
2. The Transparency Principle
Identity assurance can only take place in ways I understand and when I am fully informed.
3. The Multiplicity Principle
I can use and choose as many different identifiers or identity providers as I want to.
4. The Data Minimisation Principle
My request or transaction only uses the minimum data that is necessary to meet my needs.
5. The Data Quality Principle
I choose when to update my records.
6. The Service-User Access and Portability Principle
I have to be provided with copies of all of my data on request; I can move/remove my data whenever I want.
7. The Governance/Certification Principle
I can trust the Scheme because all the participants have to be accredited.
8. The Problem Resolution Principle
If there is a problem I know there is an independent arbiter who can find a solution.
9. The Exceptional Circumstances Principle
Any exception has to be approved by Parliament and is subject to independent scrutiny.

Bring Our Bytes Back Home

This week's Sunday Times (no link, it's behind the paywall) carries a double-page 'exposé' of the trade in stolen personal data from Indian contact centres, data entry services, IT support helpdesks and hosting services. The article describes how undercover journalists were offered lists of personally identifiable information, including bank data, credit records, loan details, card issuance details, account data and other records, allegedly stolen from the likes of Barclays, Lloyds TSB and Sky TV. The black market traders were asking from 2p to £2 per record, depending upon the potential value (driven by content, context and timeliness) of the data provided.

The article quotes a number of horrified 'victims' (none of whom has actually suffered a material loss) who express their outrage that their details are available, and in some cases claim to know the only possible source for the data, citing conference bookings and IT helpdesks. The authors interview the Information Commissioner's Office, obtaining a commitment that the ICO will investigate, and Richard Bacon MP calls for the government to cease sending personal information overseas (for example, the NHS sends forms to India for data entry purposes).

What the authors fail to do is interview an acknowledged security expert. Had they done so, they would have realised that this story is hardly news, and that it's hardly fair to point the finger specifically at India. The offshoring of data for economic reasons is fraught with risk. Services are invariably outsourced to the cheapest bidder, which means that corners will be cut somewhere, and information security controls are bound to be squeezed. The cheapest bidder is likely to draw its workforce from an environment where incomes are very much lower than in the UK, so the threshold for a successful bribe is much, much lower (almost any security system can be circumvented if the sysadmins collude to accept bribes). Firms that offshore their services are rarely in a position to monitor or enforce the arrangement (after all, the whole point was to get rid of the function), and if they do discover something amiss they're hardly likely to publicise it or report it to the police or ICO, because there is little they can do about it other than close down essential business functions (although this does sometimes happen). And even if the police are called in, the client faces the horrendous cost of liaising with the investigation and bringing a conviction, when local officers are subject to the same 'cheap' bribes that the culprits accepted.

All in all, once that data goes offshore, it's safe to assume that it's leaking, and that has always been the case.

What the article seems to reveal is an ignorance - at least amongst the individuals quoted - of the insight that credit reference agencies and data mining companies have into our personal lives, all through legal and regulated means. The claim that data could only have been leaked by Sky TV or a particular bank is hogwash, since those companies consume risk data from credit reference agencies as part of their account provisioning processes, and provide it back again in a reciprocal arrangement to maintain the accuracy and completeness of those records. The difference between the legitimate and black markets for personally identifiable information is how that information is used, and when offshore staff are handling that information on behalf of credit reference agencies, or have access to agencies' data services as part of their day-to-day jobs, then the legitimate data leaks into the black market.

So no big deal there, and no real news story for the Sunday Times. But on the same day the Observer came up with something more interesting that adds a new context: the government has allegedly reached a 'secret' agreement that access to 'particularly sensitive' personal data on all 43m UK drivers can be offshored to India by IBM. I'd argue that in most cases the data is unlikely to be 'particularly sensitive' (although photocards can imply the holder's ethnicity, and in some cases records may relate to drivers' health conditions); what is more worrying is the potential for local staff to modify records in response to organised criminals' bribes. The driving licence is, rightly or wrongly, one of the most widely trusted identity documents, and if we start to see widespread fraud entering the system (as opposed to the small-scale fraud that will inevitably already be in there) then trust in that document will be undermined. There is a strong likelihood that DVLA's data will be important for the cross-government identity assurance programme, so now is not the time to break confidence in that data source.

What's to be done? As Richard Bacon MP demands in the original Sunday Times piece, we need much tougher enforcement of Data Protection laws, but we should stop expecting that to come from overseas: the solution rests in our being able to impose severe penalties upon Data Controllers who are shown to have failed to control their offshored data in an adequate way, and even tougher penalties on companies that knowingly consume illegally-obtained data. That can only happen with reform of the regulatory bodies concerned to ensure that they are suitably resourced and empowered.

For enforcement to work, we need to be able to prove the source of both legitimate and leaked data, and that will require a mandatory change in the way that companies record personal data: specifically, it's time for mandatory metadata to be held, with associated digital signatures, to prove the source and legitimacy of a personal data asset. Only when companies are obliged to cryptographically prove the source of their data will we have any hope of meaningful enforcement. 
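As a rough illustration of that idea, a personal data asset could carry provenance metadata with a signature covering both record and metadata, so a downstream recipient or regulator can prove where the data came from and that it hasn't been altered. The field names and the symmetric demo key below are assumptions for the sketch; a real scheme would bind asymmetric signatures to the originating Data Controller's registration.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative signing key held by the originating Data Controller.
# A real provenance scheme would use a private key tied to the controller's
# registration, with the public key available to verifiers.
CONTROLLER_KEY = b"demo-key-held-by-originating-data-controller"


def sign_record(record: dict, source: str) -> dict:
    """Attach provenance metadata and a signature covering record + metadata."""
    metadata = {
        "source": source,
        "obtained_at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    payload = json.dumps({"record": record, "metadata": metadata}, sort_keys=True).encode()
    signature = hmac.new(CONTROLLER_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "metadata": metadata, "signature": signature}


def verify_provenance(asset: dict) -> bool:
    """True only if the record and its provenance metadata are unchanged since signing."""
    payload = json.dumps(
        {"record": asset["record"], "metadata": asset["metadata"]}, sort_keys=True
    ).encode()
    expected = hmac.new(CONTROLLER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, asset["signature"])
```

Because the signature covers the metadata as well as the record, a black-market trader cannot launder stolen data by rewriting its stated source: any change to either part invalidates the signature.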

Consumers will have to accept some hard facts as well: if they don't want their data to go offshore, they're going to have to pay for it to stay in the UK, because businesses will need to offset the increased cost of UK processing. Consumers also need to understand that almost every aspect of their personal history is already out there in some shape or form. We can't delete it, because we don't know where it all is, but we might possibly ensure that legitimate organisations can only use it in accordance with the law; and until there is a more effective regulatory regime in the UK, there's little point in trying to bring our bytes back home.

Time to pay for privacy?

Google has been in the news again, this time for changes to its privacy practices, which involved consolidating around 60 statements into one to cover all of Google's services, including Search, Google+, Gmail, Docs and others. Google claim this was done to simplify the user experience and thus to satisfy demands from regulators who were unhappy about the fractured privacy controls within Google's services. The move is perhaps the biggest single change in privacy management that the Internet giant has yet implemented, and it seems unlikely that such a change was made without careful consideration of the associated legal and commercial implications. So whatever Google has in mind, they know what they want to achieve. What does it really mean for everyone else?

As Robin points out, one of our key problems is that Google seem to have deliberately conflated 'privacy policy statement' and 'privacy policy': they have not only changed the way that they inform users of how they manage personal information, but they have made a major material change to how they go about that management. Specifically, Google's consolidation approach has resulted in a new policy that they will, if they choose, use personal information gathered across all services. Their policy now permits them to mine data across all of a user's services in order to simplify services, tailor the user experience and facilitate sharing and collaboration - or that's how Google is pitching it, anyway; a user might feel that the change permits Google to mine their browsing, networking, mail, documents, shopping and pretty much any aspect of their online experience, in order to force them towards Google's paying advertising customers. The Twitterati were up in arms about the change, and many people took the opportunity to delete their browsing histories before Google had the opportunity to start cross-referencing those against their other online activities.

Of course it's not just Google's web activities, or even just Google that is the problem. Facebook continues to attract criticism for privacy policies that seem to be in constant flux. Google's Android and Apple's iOS platforms have been criticised for mining users' photos and address books through seemingly innocuous apps and for bizarre or obfuscated purposes. What's causing the upset here is not so much what Google have done, but their dominance in our online lives. In a more fragmented market, such as retail or banking, if a company does something that upsets their customers, then those customers have the ability to terminate the relationship and to move to alternative providers. If sufficient customers do so, then the company takes a hit to its bottom line and changes its ways. But Google and Facebook in particular have achieved a dominance in our online world that makes it very difficult to avoid them. Users who choose to avoid Google find themselves marginalised and forced to use disjointed services from a range of providers. Those who opt out of Facebook (or any other social network for that matter) are left without networks that others enjoy. Opting out is not an option for many.

Our problem is not Google, or Facebook, or privacy legislation, or market regulation, or a lack of user-centricity in system implementations. Our problem is the underlying commercial model whereby we expect to receive these services for free. These companies deliver previously unimaginable richness of interaction without charging us a penny in cash for the experience. A substantial amount of data mining is essential if they are to create that richness, but the root cause for our lack of control over that mining is the fact that we're the product, not the customer. The money flows in from the advertisers and affiliates, but as providers fight to meet shareholder expectations for revenues they are having to push harder and harder for our data, and take increasing risks with our privacy to produce the profits.

So what's to be done? We can't put the genie back in the bottle: our data is out there, and it's not going to disappear from the interwebs in a hurry. There's no point in speculating about breaking up Google's control over the online world. It also seems improbable that competing systems with different business models will emerge in the near future; for example, the Vendor Relationship Management (VRM) approach championed by the likes of Mydex clearly has the potential to address the problem, but it's still a long way from gaining the sort of momentum that will shake the big players. What we really need is a way to pay Google, Facebook et al for their services using hard cash instead of personal data. For example, if I could pay a small monthly fee to guarantee that an Android phone would never mine my data, and would in fact create a 'walled garden' environment to protect my privacy, then my iPhone would be up on eBay in a flash. If Facebook offered me an enhanced private service with proper granular privacy controls and a certainty that my usage and relationships will never be analysed by them or a third-party app unless I expressly consent, then they'd get my monthly payment.

But a step such as that would require these companies to expose the dark heart of their business models, and that will not happen in the current economic climate. If they admit each customer is worth, on average, say, £20 p.a. to them, then all those who don't pay up will be demanding to be paid for their data. If they admit each customer is worth, on average, say, £2 p.a. to them, then their shareholders will be howling at their grossly inflated market capitalisations. Reputation businesses such as Klout exist to help these companies assign a value to individual users, but being told your friend's data might be worth £10 p.a. to Facebook while yours isn't worth £1 is hardly going to curry favour with Facebook's users. The providers can't win if they go down this route, at least not until a price point is found that satisfies consumers and shareholders alike, or a disruptive new venture enters the market and forces their hand.

So that's the challenge for the market, and in particular for VRM providers: if we want privacy *and* open data *and* free services, we need a way to make that more attractive to the major incumbents than their current business models. They need to see that they can make privacy pay without jeopardising existing revenues. And we all need to get ready to pay for our privacy.

Weekend spring cleaning

Bored? Killing time before the excitement of Monday's return to work? No, me neither, but either way Lifehacker is carrying a good set of tips to lock down privacy in your personal life, which is particularly timely in light of this week's International Switch to Firefox Day (more to follow on that).

Covering recommendations to block and monitor web tracking (I use Fluid and Ghostery), mobile device privacy and basic home computer security settings, it's worth a look if you're after a bit more control over who follows your personal life.

Poacher turned gamekeeper?


Welcome back. Or if you've not been here before, welcome. I've rather neglected this blog for the past two years because of other commitments, but hopefully I'm now in a position to restart writing about privacy and identity-related issues and what they mean to government, industry and individuals.

It's certainly been an eventful few years for privacy and identity. The election saw the new government follow through on manifesto commitments to terminate the National Identity Register and ContactPoint programmes, and in consequence there has been a slight shift in civil society interests towards monitoring the activities of private companies, in particular search engines and social networking sites.


Meanwhile, the government has been developing plans for Identity Assurance, a new approach that will allow individuals to access online services by reusing existing trust relationships they hold with commercial providers, rather than a government-issued credential. It's a complex but clever idea that will shift government thinking away from the 'deep truth' and 'gold standard of identity' philosophies of the past, and instead use a risk-based approach that should, hopefully, leave individuals in control of their online relationships whilst protecting their privacy. If we can learn from the mistakes of the past then we might just end up with a good foundation upon which to build privacy-positive ID services.


In these past few years, my role has shifted, although whether that has been from poacher to gamekeeper, or the other way around, I'm still not sure: I've been working with the Post Office for the past 18 months, in a role that is closely linked to the Identity Assurance programme, and for that reason there will be aspects of the subject that I am not in a position to discuss because of confidentiality agreements and public procurement rules.


With that in mind, roll on the blogging!


CONSENT Survey


The CONSENT project - a collaborative project co-funded by the European Commission under the FP7 programme - is seeking opinions on the use of personal information, privacy and providing consent online. You can participate in their survey here.

DotGovLabs opens for business


If you're an innovator with public service delivery ambitions, then you may wish to take a look at DotGovLabs - DirectGov's Innovation Hub, which brings together 1,600 SMEs, entrepreneurs and innovators looking at how digital can help solve social challenges. Part of the government's Skunkworks programme, the Innovation Hub aims to help government engage with experts in digital delivery.

The Innovation Hub was, until recently, only open to invited participants, but now that a critical mass of users has been reached, it's been opened up to anyone who wishes to register.

Declaration: I have no connection with DotGovLabs other than being a registered user.

(Please excuse the lack of posts in recent months. I've been heavily involved with aspects of the new cross-government identity assurance initiative, which has taken up all of my time. I'm hoping to be in a position to talk about that programme very soon).

Fines aren't working: time for a Data Protection Offenders' Register


On Tuesday the Information Commissioner, Christopher Graham, announced the outcome of his office's investigation into alleged security failures at ACS:Law, and the imposition of a £1,000 fine on the company's owner, solicitor Andrew Crossley. The case demonstrates why the Information Commissioner's Office is failing to apply fines in a meaningful manner, and why we need a fresh approach to data protection penalties.

The ACS:Law case

In 2009 and 2010, ACS:Law sent approximately 10,000 letters to individuals accusing them of breach of copyright through peer-to-peer file sharing technologies, and threatening them with legal action unless they settled the claim out of court (typically for a sum of around £500). Whilst the number of victims who gave in to these threats is disputed, Crossley himself allegedly claimed to have recovered over £1m from suspected copyright infringers.

Lists of suspects were provided by major ISPs such as BT and Sky Broadband, and from the very beginning there were anecdotal tales of incorrect or non-existent evidence of any copyright breach; it appeared that in many cases there was simply no way to substantiate the claims which gave rise to the threats. The media and public were outraged, and it was at that point that hacker collective 4chan waded in with a denial of service attack on ACS:Law's website. That attack had unexpected results: ACS:Law had stored its claim files in unencrypted form, and as the company restored its website from backup, those files were accidentally copied over with it. The files became publicly visible, revealing a list of around 6,000 defendants, including personal details, payment information and details of their alleged copyright infringement.

In September 2010 the ICO investigated the alleged breach, and it became clear that not only was the claim file accidentally published, but in some cases the information provided by ISPs had been transferred in unencrypted form on memory sticks, and was stored in an online service that was not intended for business use. Apparently ACS:Law did not seek any professional advice on how to protect that information.

So, in consequence, yesterday the ICO issued a fine of £1,000 to Mr Crossley personally (since ACS:Law has now ceased trading), stating that were ACS:Law still extant, the fine might have been closer to £200,000. If Mr Crossley pays up by 6th June, the fine will be discounted to £800.

Why the ACS:Law fine undermines the ICO's credibility

There is an important legal principle to protect company directors from the full extent of company liabilities where there has been no misconduct - otherwise no-one in their right mind would become a company director. There is an even more important principle that a law should not be used to punish individuals where their moral or legal misconduct cannot be prosecuted under more appropriate legislation - in other words, the Data Protection Act (1998) shouldn't be used to punish ACS:Law for other alleged failings. But in this case, the Information Commissioner really does seem to have failed to apply a proportionate fine.

ACS:Law had a single employee in the form of Mr Crossley. He is a solicitor, so he cannot claim ignorance as a defence for failing to comply with the Data Protection Act. He has been able to escape his punishment by winding up the company, even though what allegedly occurred at ACS:Law cannot possibly be the fault of anyone but himself. ACS:Law and its director should not be able to escape the full penalty for breach of the Act.

By applying a fine that is proportionate to the director's ability to pay, the ICO has made it clear that a company's directors can escape full censure simply through their accounting declarations. There really is nothing left to fear for companies that wilfully abuse the Data Protection Act, since that abuse has become a simple risk decision, and there is no meaningful obligation for them to comply. The ICO's ability to enforce the Act has been critically undermined by this case.

Applying an appeals process

A far more appropriate way to enforce data protection penalties would be for the Information Commissioner's Office to apply its fines regardless of the recipient's ability to pay. The only proportionality in the basic fine should be against the size of the business concerned, not whether it has fallen on hard times since its original breach of the Act. The fine could then be suspended or reduced subject to a public appeals process - as opposed to discussions behind closed doors - in which the recipient argues their case for a reduction.

Introducing the Data Protection Offenders' Register

We also need to introduce a new concept of 'being struck off' the register of Data Controllers.

Just as a prosecuted company director can be prevented from holding that office again for a set period; or a professional might be struck off by their professional body and hence lose their licence to practise; or a driver found guilty of repeated or serious motoring offences may be banned for a period; so individuals found guilty of knowingly mishandling personal data should be legally prevented from doing so again for a set period. This could include:

  • banning the individual from registering as a Data Controller;
  • banning the individual from setting or managing company policy for the handling of personal information;
  • banning the individual from handling personal information in their professional capacity without supervision from another individual (much as a learner driver may not drive without a qualified driver in the passenger seat);
  • forcing the individual to declare their ban to any future employer within the period of censure;
  • applying a further fine or criminal conviction in the event of breach of these rules.

With a new regime of applying fines regardless of the individual's ability to pay, followed by an appeals process, and then requiring convicted individuals to sign a register of Data Protection offenders, there would be a meaningful way to enforce the Data Protection Act. Until then, the Information Commissioner's efforts are likely to have very little deterrent effect, and incidents such as ACS:Law will keep happening.

The Department of ‘No’


(The following article was originally published by Big Brother Watch in their book "The state of civil liberties in Modern Britain").

If the government is serious about its policy objectives of slashing administrative costs, bolstering the UK's cyber defences, moving away from proprietary software systems, putting data into the Cloud, and treating personal data with the respect it deserves, then it is time to reassess the role of information assurance and how it is delivered. There is a pressing need to reform the information assurance function so that we have proper security governance, and so that information assurance supports, not hinders, the government’s policy objectives.

Public Sector Data Leaks

With the announcement of a £650m budget for cybersecurity, coupled with the axing of defence infrastructure that until recently would have been considered critical to the protection of Britain's national interests, Prime Minister David Cameron has delivered the unequivocal message that cybersecurity is a cornerstone of the UK's broader defence interests. UK defence companies will be switching their research budgets away from military hardware and into homeland security products, and information security companies around the world will doubtless be examining the UK security market, keen to get their share of the new government spend.

All this has to be a good thing for the central and local government authorities who have seen public confidence in their ability to protect information eroded by a seemingly endless string of high-profile data loss incidents. Ever since Chancellor Alistair Darling informed Parliament that HM Revenue & Customs had misplaced the details of child benefit claimants, we have been bombarded with reports of files left on trains, memory sticks dropped in the street, emails accidentally sent to the wrong mailing lists, hard disc units lost and laptops stolen from cars; and despite senior managers time and again promising the Information Commissioner that 'lessons have been learned', the incidents keep on happening. Public authorities appear to be incapable of protecting information. What can possibly have gone so badly wrong with information assurance that our authorities are apparently unable to keep anything secret, at a time when the Prime Minister tells us that our cyber security has never been more important to the nation?

The Department of ‘No’

The UK government’s information assurance function is distributed across government through a number of agencies. Perhaps the best known of these is CESG (formerly the ‘Communications-Electronics Security Group’ of Government Communications Headquarters), the national technical authority for information assurance. Based in Cheltenham, and reporting to the Cabinet Office, CESG is tasked with delivering a range of products and services including threat monitoring, product assessment, advisor training and system testing.

The information assurance function is not exclusive to CESG. The Cabinet Office has a Security Policy Division (COSPD) which produces part of the Security Policy Framework (SPF) that replaced the Manual of Protective Security (the government’s primary standards document for information assurance), and CESG produces the rest of the SPF. The National Cybersecurity Strategy also sits within Cabinet Office, but focuses more on protecting the broader Critical National Infrastructure (CNI) from major disasters, terrorist threats, foreign intelligence services and serious/organised crime than on general systems security. The MoD uses equivalent standards and administration internally, which refer back to the products and services provided by the government’s other security centres (all of which have a common root in standards that evolved into ISO/IEC 27001:2005), but which operate completely separately. Other parts of the security governance function are fragmented across many committees and boards.

Significantly, this substantial infrastructure is focussed mainly upon advisory services rather than actually implementing and managing systems security: that burden falls upon the Senior Information Risk Owner (SIRO) in individual public authorities. This individual, who should ideally be from an information risk background, is the focus for information assurance delivery at a Board level within their authority. In smaller bodies, the role of SIRO is often shared with other duties such as Chief Information Officer.

The Cabinet Office has recently established the Office of Cyber Security and Information Assurance (OCSIA), which has yet to have an opportunity to reform the information assurance function, but publicly appears to be more focussed upon the cyber defence agenda than the day-to-day mechanics of running information assurance.

With this advisory capability, one would imagine that the government’s information assurance function would be robust and strong, drawing upon a wealth of shared expertise that is delivered in such a way that security enables and supports service delivery. Unfortunately, all too often the opposite is true.

Cost-Effective Information Assurance? The Department Says ‘No’

Government lacks a focal point for information security: there is no ‘Government Chief Information Security Officer’ or ‘Office for Government Information Assurance’ - in other words, no one individual or organisation accepts accountability for the proper governance of data in the public sector.

The fragmented approach to information assurance has developed over many decades, and the cultural unwillingness of government bodies to accept responsibility for an issue as ‘toxic’ as information assurance has left the subject in the long grass as far as most CIOs are concerned. Even the proliferation of Quangos under the last Labour government did not lead to the creation of a body that might deal with these critical issues, despite some of the highest-profile data loss incidents ever to impact the public sector occurring during its term of office.

Instead the various bodies tasked with information assurance focus upon their own jurisdictions and rarely cooperate successfully: the MoD does not discuss its security standards, although they are little different from those in use across the rest of government; CESG and COSPD will only release information to suitably cleared individuals, and rarely reference each other’s work. Each department and agency has to pay to support its own security infrastructure rather than drawing upon the economies of scale that might be achieved by a central security team working for the common good of government. The information assurance environment is far from cost-effective.

Information Risk Management? The Department Says ‘No’

This lack of cooperation doesn’t just mean that key activities are duplicated: it also means that without support from their managers, those tasked with protecting systems are afraid to take risks, for fear of being blamed if an incident occurs.

The problem is that information assurance is not about absolute control, and any professional security manager will acknowledge that there is no such thing as 100% risk avoidance. Instead, it is about assessing the information risks faced by the organisation, developing mitigating controls and actions, and ensuring that they are managed properly so that the risk levels are reduced to a point where they are proportionate and acceptable.

This means that incidents will always happen. This may be because security controls are judged to be disproportionately expensive (for example, spending many millions of pounds on security to protect assets worth only some thousands of pounds); because individuals failed to comply with the instructions given to them (for example, downloading unprotected files on to a memory stick to take home, then losing that memory stick); because the system is attacked by a capable and dedicated enemy (for example, an authorised user taking copies of MPs’ expense claims); or because of a ‘zero day’ exploit (for example, a hacker breaking into a system using a weakness that was previously unknown to the security officer).

Whatever the cause, security incidents will always occur, and the public sector culture is to look for someone to blame - remember how the HMRC incident was almost immediately blamed upon a ‘junior clerical officer’ before it was revealed that systemic failures were at the root of the problem? Security officers are rightly fearful of being blamed for incidents, and in the absence of someone who will act as an advocate for them when things go wrong, they are forced to fall back on the only safe path available to them, which is to say ‘no’ when the business wants to do anything which might carry an associated security risk. The likelihood of the current information assurance community being willing to support the government’s cloud computing ambitions seems slim indeed.

As a result, most public servants view information assurance as an obstacle, not an asset. Because of poor leadership, excessive bureaucracy, and a culture of unnecessary secrecy, public authorities are unable to obtain cost-effective information security controls. The current infrastructure will neither permit nor support the new commitment to respecting personal data, making government data available, or protecting data that needs to be kept secret.

Secure Systems? The Department Says ‘No’

Ironically, the culture of ‘No’ has not resulted in better security within the public sector. Project managers, afraid of having their plans thrown into disarray by uncooperative security professionals, simply avoid seeking security advice. Enterprising users who need to get their jobs done seek out risky ways to bypass security controls because the security departments won’t allow them to get on with what they have to do. For example, it is common to find unauthorised online file-sharing services being used to exchange information because the security department has blocked USB memory sticks and CD drives without providing an alternative. That’s how accidents happen.

The problem has even deeper consequences outside of Westminster and Whitehall. Most important standards, guidelines and publications are protectively marked such that they are only available to individuals with appropriate levels of security clearance working on appropriately secured PCs. But local government bodies, for example, rarely conduct background checks on their staff beyond a basic criminal records check, so individuals tasked with securing local authority systems don’t know how to secure them in line with government requirements because they are not cleared to see those requirements - and aren’t allowed to hold copies because their PCs aren’t sufficiently secure. Without the intervention of costly consultants who have the correct clearances and computers, this paradox can’t be broken.

Those consultants are a very special breed indeed. Only the few hundred members of the CESG Listed Advisor Scheme (CLAS) are officially qualified to provide security advice across government. They hold the necessary clearances, and have access to CESG’s source materials. What they do is not particularly ‘special’ compared with their private-sector colleagues, and because the pool of available talent is so small, and the barriers to entry are high (CESG only accepts a limited number of candidates once a year, and they need to pay a substantial fee for clearance, acceptance and training), public authorities have to draw from a relatively small - and therefore uncompetitive - pool of consultants for their information assurance advice.

CESG has for some years been attempting to move parts of the CLAS environment into the private sector, but this has yet to deliver any significant change in the way that systems are secured. The outcome of this ‘closed shop’ is that local authorities and arm’s length bodies very often fail to comply with government security standards simply because they don’t know that those standards even exist, and if they do, they can’t gain access to either the standards or cost-effective individuals who are able to assist them. We therefore have a public sector environment in which the prevailing culture and practices conspire against effective information assurance.

Privacy by Design? The Department Says ‘No’

Clearly a public sector that struggles with information assurance will also struggle to respect privacy: if personal data cannot be kept secret, then it cannot be kept private either. But public authorities’ inability to effectively manage personal data runs much deeper than that, since CESG’s formal policies until recently simply didn’t get the idea of privacy. Formal risk assessment processes tried to assign protective markings according to the volumes of personal records rather than the sensitivity of the data: so, 999 personal records might be considered Not Protectively Marked, whilst 1,000 would be marked at the higher level of Restricted. Authorities could circumvent the more onerous controls by simply breaking databases down into smaller files of less than 1,000 individual records.
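The volume-based rule described above, and the loophole it creates, can be shown in a deliberately simplified sketch. This is purely illustrative: the threshold and marking names follow the text, and the functions are invented for this example, not any real CESG tool.

```python
# Illustrative only: a marking rule driven by record volume rather than
# sensitivity, and why splitting a dataset circumvents it.

VOLUME_THRESHOLD = 1000  # records at or above this count attract a higher marking


def protective_marking(record_count: int) -> str:
    """Assign a marking purely on record volume, ignoring data sensitivity."""
    return "Restricted" if record_count >= VOLUME_THRESHOLD else "Not Protectively Marked"


def split_into_files(records: list, max_size: int = 999) -> list:
    """Break a database into chunks that each fall below the threshold."""
    return [records[i:i + max_size] for i in range(0, len(records), max_size)]


database = [{"id": n} for n in range(5000)]  # 5,000 sensitive records

print(protective_marking(len(database)))             # Restricted
chunks = split_into_files(database)
print({protective_marking(len(c)) for c in chunks})  # {'Not Protectively Marked'}
```

Splitting the same 5,000 sensitive records into files of fewer than 1,000 records drops every file below the threshold, even though the aggregate sensitivity of the data is completely unchanged.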

What’s more, those risk assessment models are designed around an assumption that authorised users are always trustworthy - how else could designs such as the ill-fated Contactpoint, or the NHS Summary Care Record, be allowed to exist where hundreds of thousands of users can access millions of individuals’ sensitive private records? The risk assessment processes treat individuals as low-value assets whose privacy is significantly less valuable than, say, a Minister’s public reputation.

Private companies, and in particular those in the financial sector where the FSA has demonstrated an appetite to impose punitive fines for misuse of personal data, woke up long ago to the need to show greater respect for personal data. The public sector, where senior public servants are rarely held accountable, and the sternest sanction generally applied is a letter from the Information Commissioner’s Office, has not kept up with the change. In their defence, CESG have made some positive revisions to their personal data handling rules in recent years, but much more needs to be done if the government is ever to meet individuals’ expectations of privacy.

Open Source Software? The Department Says ‘No’

The new government’s commitment to open source systems represents perhaps the greatest challenge that the information assurance community has faced in many years. The use of software that has been collectively developed, with publicly available source code, flies in the face of long-established security policies and practices, which have traditionally demanded that source code comes from an approved developer, is scrutinised for vulnerabilities, and is kept out of the public domain.

In general, software and hardware vendors are expected to have their products pre-tested for use in government systems (something which is not required in the private sector), and to pay up front for that testing. CESG has a number of services, such as the CESG Claims Tested Mark (CCTM) and the CESG Assisted Products Service (CAPS), that are used to test the security of products that are being sold to public sector organisations. When private companies sell to government, they can justify the expense of the testing process, since that will grant them access to a lucrative new market. But the same does not hold true for open source software: in the same way that drugs companies won’t pay for clinical trials on products that they can’t patent, vendors won’t pay for the testing of public domain software when they cannot expect to charge for it at the end of the process. Furthermore, the test processes are notoriously long-winded and complicated, so even vendors of proprietary systems are reluctant to invest in them. Whilst not all products have to be subject to this test approach, failure to demonstrate test approval can count against them during procurement, and as a result public authorities are driven towards a small number of approved, tested, and often outdated technologies.

Once products have been selected, and designs are in place, the complicated process of accreditation begins. Security Officers - or more commonly CLAS consultants - conduct a tightly prescribed risk assessment that is used to determine whether the system requires formal accreditation (a certificate to prove that a system is fit to handle a given level of data, and to interconnect with similarly secure systems), and potentially to prepare a Risk Management Accreditation Document Set (RMADS) that is used to define security controls. Accreditation of open source systems, where there are no vendors to make assertions about security levels, is very difficult indeed using current processes.

But accreditation isn’t the end of the problem: like all software, open source software requires patching and upgrading to keep up with technology developments and newly-discovered security vulnerabilities. Without a vendor to pay for security testing the patches and updates under the current regime, open source software will remain largely inaccessible for government.

In the private sector, where there is no obligation to verify the security claims of systems vendors, and organisations can select their own risk assessment approaches, these problems simply don’t exist. Independent testing schemes can be used to provide customers with greater assurance of security capabilities, but in general market forces drive vendors towards delivering secure systems, since a major failure will count against them in procurement processes. Government’s open source goals will remain hampered by information assurance until a new way of dealing with the security of open source software can be developed.

The Department of ‘Yes’: Treating Information Assurance as a Business Enabler

The relative success of private-sector security practices, and the fact that large corporations do not struggle to manage information security in the way that government does, shows that it should be perfectly possible to move to an environment in which information assurance helps, rather than hinders, delivery of public services. In particular, we need to ensure that:

        information is available to all that legitimately require it, is appropriately protected, delivered, and of assured integrity and accuracy, so that information can support the needs of government, industry and individuals;

        public confidence in the ability of public authorities to handle personal information is restored;

        public authorities can adopt open source systems and cloud technologies without security being a disproportionate burden;

        public authorities break away from the negative mentality of information assurance that blocks innovation, and instead move towards a new culture that is able to support, rather than hinder, the delivery of new technology policies.

The significant changes in government IT policy, the shake-up of its delivery driven by the spending review, the government’s commitment to cybersecurity as a cornerstone of the UK’s defence strategy, and the establishment of the Office of Cybersecurity and Information Assurance (OCSIA) within the Cabinet Office collectively drive the need for reform. OCSIA may be the best hope for achieving reform, but will only succeed if there is a collective will in that office to do things differently. Ministers and senior civil servants are too quick to defer to Cheltenham on the assumption they are ‘the experts,’ when evidence suggests they are behind the times and operating in a mainframe mindset in an Internet age. They have become unaccountable arbiters of what does and does not happen, and what can and cannot be used. There is a clear need for leadership, to remove duplication of responsibilities, to improve availability of security standards and technologies, and to change the way that security is perceived across government. There is also a need for greater participation by local government and the private sector, whilst recognising that some aspects are better suited to remaining under government control.

A few simple actions would suffice to create the ‘Department of Yes.’

1. Appoint a pan-government Chief Information Security Officer as a new focal point for information assurance

Just as a large company would be expected to have a Chief Information Security Officer (CISO), the OCSIA should appoint a Government CISO responsible for the proper implementation of information assurance across government. This role must not be one that is in any way combined with the cyber defence agenda (which invariably becomes politicised and distracted from the day-to-day running of information assurance), but rather a ‘hands on’ leadership position that provides a figurehead for information assurance issues. Bringing in a CISO from industry, rather than public service, would ensure a break with past practices and a fresh approach to the task in hand.

2. Create a government CISO Council

The Government CISO should chair a new Government CISO Council within the OCSIA. This group, comprising CISOs and/or SIROs from all major parts of government, should act as the focal point for all information assurance issues, and hold responsibility for development and maintenance of security standards, accreditation, product certification and professional development across government. The CISO Council should be engaged in policy development across central and local government to ensure compliance with national and international legal obligations. Where public sector security incidents occur, the CISO Council should be involved in independent investigation and reporting.

3. Consolidate existing duplicate information assurance services

The Government CISO should work with OCSIA across government to amalgamate existing policy and solutions branches, including CESG, COSPD and the relevant parts of MoD, into OCSIA. This by implication will require consolidation of duplicated services and roles. The newly-amalgamated security body should take responsibility for all aspects of establishing standards and procedures for the defence of public sector ICT infrastructure, and should develop and publish security standards for use in government and the private sector. The body should also operate ‘How To’ teams of experts who look for cost-effective solutions to security problems, and constantly improve the advice and controls available to government, drawing upon the best the private sector has to offer.

4. Ease the administrative security regime for lower-value data

The UK government operates a protective marking policy for its data assets to ensure that they are used and secured in accordance with the value of those assets. Clearly some of that information - particularly when assigned a Top Secret or Secret marking - requires very robust security controls. But the vast majority of data, particularly outside of Whitehall, sits at Restricted or even lower, and the nature of the data is not dissimilar to that which might be held by a private company such as a bank. Yet that information is subject to ‘special’ information assurance controls that are often significantly more onerous and administratively complicated than might be found in the private sector, despite those controls having their roots in the same standards.

If OCSIA were to relax the administrative processes for securing Restricted data, such that authorities may use any commercial services or products so long as they comply with the basic security policy requirements defined in the Security Policy Framework and supporting materials, then the market would be opened up for any commercial product or service vendor to compete in the public sector. Rules will need to remain in place to ensure that data is correctly marked, and not ‘upgraded’ to higher protective marking levels than appropriate. Implemented correctly, this change would not result in chaos or insecurity, but instead allow greater competition to bring down the cost of delivery, and free up local authorities to get on with securing their information without the burden of complying with security frameworks that are intended to deal with data at much higher levels of security. CLAS consultants could shift their focus to systems operating at the higher protective marking levels.

5. Sort out the existing mess of unaccredited Whitehall systems

The imperative for information assurance reform does not apply solely to new systems: there is little value in securing new ICT infrastructure if it has to interface with older systems with unproven levels of security. The government’s own report into data losses (the Hannigan report) identified approximately 2,300 government systems that have not been subject to any form of assurance certification (known as accreditation), but made no demands that those legacy systems should be secured. Significantly, recent changes to the Security Policy Framework introduced a new marking level of Protect which was, in part, intended to ease the burden of accreditation, but anecdotal evidence suggests that it has had the opposite effect, and is instead seen as a new, unfunded administrative burden. This needs to be addressed as a matter of urgency: those systems must either be secured or scrapped.

Equally importantly, senior civil servants still have the power to override the need for accreditation if they so choose: in other words, to disregard security requirements if these are too expensive or likely to take too long. This exemption must also be dropped: if system security is too expensive, then the system itself is too expensive, and other more affordable ways must be found to deliver the same outcomes.

6. Voluntarily accredit open source software where appropriate

If the current approach to accreditation remains in place, then government must take responsibility for accrediting, testing and maintaining the security of open source software. There is no reason why OCSIA, or even a private company, could not provide and maintain its own secure builds of the likes of Linux, OpenOffice or OpenSQL for use in government. Builds and patches would be checked and tested by a central team without the need for a vendor to sponsor the work, thus making the software available across government, and saving costs on software licensing and duplicated testing.
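At its simplest, a central team maintaining such builds would verify every source archive and patch against a digest published by the upstream project before building and signing it for government use. A minimal sketch of that first step, assuming the published digest is obtained over a trusted channel (the file name and digest below are hypothetical placeholders, not values for any real release):

```python
import hashlib
import hmac


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_archive(path: str, published_digest: str) -> bool:
    """Accept a source archive only if its digest matches the published value."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())


# Hypothetical usage; in practice the digest would come from the upstream
# project's signed release notes, not be hard-coded here.
# if not verify_archive("some-release.tar.gz", "<published digest>"):
#     raise SystemExit("archive does not match published digest; refusing to build")
```

A real scheme would also check the upstream maintainers' signatures, but digest verification is the minimum needed to stop a tampered archive entering the build pipeline.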

7. Develop the information assurance profession

If the government is to obtain access to the best possible security expertise, then the profession needs major reform. OCSIA should take responsibility for development of the information assurance profession, working in close partnership with relevant information security professional bodies. This will include:

        defining a career structure for information assurance in the public sector;

        developing information assurance professional development and training syllabuses for delivery by commercial organisations;

        providing examination and certification of government security professionals, with an emphasis on facilitating simple and affordable cross-qualification from the private sector, so as to expand the pool of professionals available to government;

        governing the certification and management of inspectors and accreditors;

        maintaining a pool of expert instructors and project managers to coach and where necessary manage particularly large, innovative or sensitive public-sector projects.

CESG has already taken steps down this route by moving aspects of the professional qualifications across to the Institute of Information Security Professionals (IISP). If it were to go a little further and insist that all government information assurance professionals must become IISP members who maintain Continuing Professional Development (CPD) training, and could be struck off for malpractice, unprofessional conduct or incompetence, then there would be a case to argue for abandoning the overhead of CLAS altogether.

More for Less from the Department of ‘Yes’

In the world of the Department of ‘Yes,’ information assurance will be a service enabler. Public authorities will have the confidence to adopt innovative new technology schemes, knowing that they will be supported in doing so by their information assurance teams. They will look to their information assurance groups for support at the earliest stages of projects, rather than trying to hide from them. They will understand that on the rare occasions that their security advisers say ‘No,’ there is a good reason for them to do so.

If we want an information assurance function that really supports public authorities, and that can deliver more for less, then these changes are cheap and easily done. We simply have to ask OCSIA to reform the information assurance function, give that office the power to do so, and support it when it encounters inevitable resistance from within the security establishment. All it takes is the will to say ‘Yes.’



Change of Identity


Those of you who follow NO2ID, arguably the most successful civil society pressure group of the past generation, may be aware that National Coordinator Phil Booth has just stepped down from the role after six years leading the organisation.

Phil's quite a remarkable individual, both physically and intellectually a very big guy, and has achieved many remarkable things in his time with NO2ID. He grew the group into one of the largest and best-connected lobby groups in the country; established a personal network of peers, politicians, civil servants, technology experts, industry leaders and academics; and successfully beat down one of the last government's cornerstone manifesto commitments with just a tiny budget and his own undrainable energy reserves.

What's most remarkable about Phil has been his ability to engage across the entire spectrum throughout that time. He recognised the need to work with everyone from the ministers pushing the programme, through the suppliers pushing the technology, to the hard core of ID Card opponents who pledged civil disobedience rather than compliance. He remained courteous and focussed even at times when the government was engaged in some very underhand tactics to destabilise both his, and NO2ID's, position.

Of course Phil would be mortified to be solely credited with NO2ID's success, and the power and passion of that body has to be applauded, but there's little doubt that he has been instrumental in getting us to where we are today. I very much hope that once he's taken a break we'll see him back in the ID space, perhaps this time designing the new citizen-centric, privacy-friendly authentication schemes that will emerge from Whitehall over the next few years?

What's in a name?


Quite a lot actually, particularly in the world of social media. The popularity of Facebook, Twitter etc is very much driven by their flexibility in extending our real-world lives into the virtual in whatever manner we wish, including allowing us to completely reinvent - or fabricate - ourselves online.

The BBC reports on the rather odd case of Facebook allegedly taking down a user's account because she was 'impersonating' Kate Middleton. She wasn't doing that, she just happens to be called Kate Middleton, and I'm sure there are plenty of other Kates out there who share that surname. It's unusual because in most cases, social media sites leave it to users to sort out name ownership amongst themselves, except where there is a clear criminal intent to defraud or mislead.

Our problem is that the glue that binds online personae to their friends/followers/acolytes is their name: it is the primary identifier for the account, and often the tool against which friends may search for each other. For example, I have three social networking accounts: a Facebook profile which I use mainly for social purposes, a Twitter account that is largely focussed on my professional network, and a second Twitter account in which I take on the persona of an entirely fictional character. Annoyingly, the fictional character has more followers than I do, but that's probably because he's much more interesting than I am, and has some very interesting fictional friends.

We have invented a social media world that reflects the simplest of our identifying conventions from the real world. Just like the real world, we can be pseudonymous. After all, a name is not a fixed attribute, and an individual can have multiple names and change those whenever they wish. That may be fine for social media applications, but it's not good enough for a broader ID system, except possibly as a selector that allows an individual to point to the attributes that they wish to associate with a particular transaction or relationship.

So long as our chosen identifiers are not unique, and we continue to use contextual, changeable identifiers such as names in public, this problem will continue. Names also give third parties a simple way to track us across multiple accounts, or to assume incorrectly that individuals who share a name are one and the same, and that is a key privacy weakness. We need the option of meaningless but unique identifiers that prevent such tracking, yet still allow us to identify ourselves uniquely when we wish to do so. More on that in another article.
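One well-known construction for such identifiers is to derive a different opaque value for each relying service from a single per-user secret, so that two services cannot correlate their users by comparing notes. A minimal sketch, assuming the secret is held by the individual or their identity provider (the secret and service names below are invented for illustration):

```python
import hashlib
import hmac


def pairwise_id(user_secret: bytes, service_name: str) -> str:
    """Derive a stable, opaque identifier for one user-service pair."""
    return hmac.new(user_secret, service_name.encode("utf-8"), hashlib.sha256).hexdigest()


# Hypothetical secret and services, purely for illustration.
secret = b"per-user secret material"
library = pairwise_id(secret, "library.example")
clinic = pairwise_id(secret, "clinic.example")

assert library != clinic                                  # the two services see unrelated values
assert library == pairwise_id(secret, "library.example")  # but each is stable on repeat visits
```

The same idea underpins the pairwise pseudonymous identifiers found in federated identity schemes: unique enough to identify ourselves when we choose to, meaningless enough to defeat tracking.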

In the meantime, I'm pleased to see that the top handful of hits against my name in Google report on my many acting successes, my distillery and US real estate business. Maybe I am as interesting as my fictional persona after all?

The State of the Electronic Identity Market


The European Commission's Institute for Prospective Technological Studies (IPTS) has published a report on 'The State of the Electronic Identity Market: Technologies, Infrastructure, Services and Policies.' I co-authored the report together with teams from IPTS and Consult Hyperion, with the objective of exploring where individuals' identity data are converted into credentials for access to services.

The document concludes that the market for electronic ID is immature. It claims that the potentially great added value of eID technologies in enabling the Digital Economy has not yet been fulfilled, and fresh efforts are needed to build identification and authentication systems that people can live with, trust and use. The study finds that usability, minimum disclosure and portability, essential features of future systems, are at the margin of the market and cross-country, cross-sector eID systems for business and public service are only in their infancy.

This was a particularly tough document to write, since the scope of ID is potentially so large, yet there are so many confused and conflicting concepts, terminologies and delivery approaches. Qualitative data about the value of ID services is almost non-existent, and tends to focus principally upon enterprise identity management technologies. At the time we wrote the document, the UK was gripped by the inertia and non-delivery of the failing National Identity Service, and the impact of that is reflected in the document.

The report is available for free and can be downloaded here.

Private Lives in a Database World


[I was kindly invited to respond to a speech delivered by former Information Commissioner Richard Thomas CBE at a dinner at the ICAEW. The following is the text of that response]

In 1890, Samuel Warren and Louis Brandeis famously described privacy as “the right to be let alone.” For over a century since then, society has developed legal, technical and social frameworks that protected a concept of alone-ness, of isolation, of keeping others away from the individual and from information about that individual. Our concept of privacy has become one of ‘urban anonymity:’ we believe we have some degree of anonymity when we are in public, because if nobody knows who we are, our actions cannot be traced back to us and so cannot have consequences.

But Richard has described how the emergence of the Internet has stood that idea on its head in the past ten years. The explosion of data, of access to that data, and of tools to search, filter, analyse, interrogate, present and disseminate that data, placed in the hands of government, companies and individuals, has stripped away that veneer of anonymity and created a dystopia in which our privacy is fading, not because of our failure to control privacy, but because privacy itself has changed, and the old controls are no longer able to contain or to manage the ways in which we share information with others. Nor has this erosion been gradual: great swathes of our privacy have been cut away by tragic catalyst events such as the killings of Jamie Bulger, of Holly Wells and Jessica Chapman, and of Baby P, and the attacks on the World Trade Centre and London’s transport system.

Privacy is no longer about keeping our personal information secret, but is instead about controlling how it is used. And unless we can enforce that control, the only possible outcome for our society is total transparency: a world in which nobody has any secrets at all, and individuals have no meaningful control over how those secrets are used. Nothing is ignored, nothing is forgotten, nothing is forgiven. That is the surveillance society which four years ago Richard warned the government we will sleepwalk into if we continue down this path.

There is still hope: during his tenure as Information Commissioner, Richard recognised the critical need not to prevent access to information – something which is now impossible, as Wikileaks have shown the world’s governments – but to render individuals, organisations and governments accountable for how that information is used. This evening he has described how the legal approach to accountability can work. But I would argue that if we continue to rely solely upon regulation to enforce that accountability, then we will never win, since there will always be those corporations – and in particular global ones – who choose to operate above the law, and Richard’s successor has discovered just how difficult it can be to fight the corporate spin machine.

True accountability must depend upon mathematics, not upon who has the best lawyers. As consumers, we must demand that privacy controls are coded into every aspect of our online world, so that we regain control of our information. It is consumers, not corporates and governments, who should dictate what is collected, processed, stored, disseminated, derived and deleted. And this can only happen when we have delivered the technical, as well as the regulatory, demands of Privacy by Design.

And that accountability will, ironically, depend upon us delivering a truly effective population-scale identification and authentication system – not the control-freakery daydream that is thankfully now being struck from the statute books, but a proportionate, federated, privacy-enabling infrastructure that will provide the cryptographic roots of true information accountability. Individuals will be able to control how their information is used and by whom, and to identify and prove easily when misuse has occurred. In fact, in a utopia where the cryptographers rule, I’m sorry to say for Richard that there might even be no further need for lawyers, or even an information commissioner.

But for now we have to live in reality, and that reality needs the rules and regulators that Richard has described. What I hope we can discuss now are the implications of his ideas for us as individuals, organisations and professionals, and how we can move forward from our imperfect present to a pretty good – if not actually perfect – future for privacy.

Talking balls on Facebook


The NHS Choices website is a cornerstone of the government's drive for health service efficiency and to move service delivery online. Users can log on to find out more about NHS services, and to use a symptom checker to understand what might be wrong with them and (hopefully) seek medical attention where appropriate, or save a doctor's time if their condition turns out to be nothing more than a cold. The site has made an effort to engage with social networking sites, for example by integrating the Facebook 'Like' button. And as Mischa Tuffield of Garlik has spotted, this is where we get a big privacy FAIL.

Mischa points out that a visit to an NHS Choices conditions page calls on four external service providers:

l.addthiscdn.com
statse.webtrendslive.com
www.facebook.com
www.google-analytics.com
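Findings like Mischa's can be reproduced with a few lines of code: scan a page's markup for `src` attributes pointing at other hosts, since each one triggers an automatic browser request. A minimal Python sketch, run here against a cut-down stand-in for an NHS Choices page (the embed URLs are illustrative, not the site's actual markup):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyHostScanner(HTMLParser):
    """Collect the external hosts a page makes the browser contact automatically."""

    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        # Only 'src' attributes (scripts, iframes, images) fire requests
        # without any user action; plain hyperlinks do not.
        for name, value in attrs:
            if name == "src" and value:
                host = urlparse(value).netloc
                if host and host != self.page_host:
                    self.hosts.add(host)

# Illustrative stand-in page carrying the same four embeds Mischa found
sample = """
<html><body>
<script src="http://www.google-analytics.com/ga.js"></script>
<script src="http://statse.webtrendslive.com/dcs.js"></script>
<img src="http://l.addthiscdn.com/live/t00/250lo.gif">
<iframe src="http://www.facebook.com/plugins/like.php?href=PAGE"></iframe>
</body></html>
"""
scanner = ThirdPartyHostScanner("www.nhs.uk")
scanner.feed(sample)
print(sorted(scanner.hosts))
# → ['l.addthiscdn.com', 'statse.webtrendslive.com',
#    'www.facebook.com', 'www.google-analytics.com']
```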

Two of these – Google Analytics and Webtrends – are used to monitor web traffic. In theory the privacy implications are relatively minor, although in certain scenarios it would be possible to identify an individual user given access to other information. It's odd that the NHS has chosen to use third-party analytics services rather than implementing its own, but this problem has been explored in detail elsewhere, so I won't dwell on it here.

However, the Facebook and AddThis links are there to drive the Facebook 'Like' service, and this is where our problems begin. If a user visits the page from a browser they have previously used to access Facebook, then Facebook automatically learns that they've been to that particular conditions page. So if someone is researching a condition – let's say testicular cancer – Facebook gets to find out about that interest. Not good. And it gets worse: if the user feels they've received useful information and clicks the 'Like' button (or does so accidentally), it shows on their Facebook profile, and that's really not good at all. Imagine fearing you have a serious illness that you've kept from your spouse, and accidentally clicking 'Like' – they get to find out. So does a potential or current employer checking your profile. The consequences could be very significant indeed.
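The mechanics of the leak are worth spelling out: when the Like iframe loads, the browser automatically attaches its Facebook cookies (including `c_user`, which carries the logged-in account id) and identifies the embedding page via the widget's `href` parameter and the `Referer` header, so Facebook's servers can trivially join the two. A Python sketch with illustrative values – the request shape is a simplification, not captured traffic:

```python
from urllib.parse import urlparse, parse_qs

# Simplified model of the request a browser makes, unprompted, when the
# Like iframe loads. The header names are standard HTTP; the cookie value
# and URLs are made up for illustration.
request = {
    "url": "http://www.facebook.com/plugins/like.php"
           "?href=http%3A%2F%2Fwww.nhs.uk%2Fconditions%2Ftesticular-cancer",
    "headers": {
        "Cookie": "c_user=1234567890",   # identifies the Facebook account
        "Referer": "http://www.nhs.uk/conditions/testicular-cancer",
    },
}

def what_facebook_learns(req):
    """Join the account id from the cookie with the page being visited."""
    cookies = dict(p.split("=", 1) for p in req["headers"]["Cookie"].split("; "))
    page = parse_qs(urlparse(req["url"]).query)["href"][0]
    return cookies["c_user"], page

print(what_facebook_learns(request))
# → ('1234567890', 'http://www.nhs.uk/conditions/testicular-cancer')
```

No click on 'Like' is needed for this pairing to occur – simply loading the page is enough.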

I'm really quite shocked that NHS Choices has allowed this to happen, and more importantly that they have clearly failed to apply any form of effective Privacy Impact Assessment to how they deliver health information. If they do wish to connect to Facebook or analytics engines, then they should require an explicit 'opt-in' from the user before any information is shared at all. The NHS's privacy policy simply outsources the problem to Facebook, leaving users in the dark about the consequences of this functionality.
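One way to implement such an opt-in is simply not to emit any Facebook markup until the user has consented, so the browser never contacts facebook.com in the first place. A sketch of the server-side decision, with a hypothetical cookie name and click handler – not NHS Choices code:

```python
def render_share_widget(consent_cookie):
    """Return the page fragment for the sharing area.

    Until the user has opted in, no third-party markup is emitted at all,
    so no request ever reaches facebook.com. The cookie value and the
    optInToSocialWidgets() handler are illustrative names.
    """
    if consent_cookie == "social-widgets=accepted":
        return ('<iframe src="https://www.facebook.com/plugins/like.php'
                '?href=PAGE"></iframe>')
    # A plain local button: clicking it would set the consent cookie and
    # reload, and only then would the third-party iframe be served.
    return '<button onclick="optInToSocialWidgets()">Enable sharing</button>'

print(render_share_widget(None))            # local button only
print(render_share_widget("social-widgets=accepted"))  # real embed
```

The same gate applies equally well to the analytics scripts.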

I'd like to hope that Mischa's research will force the NHS to modify the website, and that at the very least the functionality will be suspended until the privacy issues have been properly investigated.

[Thanks to Ian for pointing this one out]

Disclaimer

The views expressed in this blog are my own, and do not necessarily reflect those of any client or other organisation.
