So I read with some interest a tech note from Google on the impending use of a transforming proxy server for Chrome for iOS & Android. The idea is to speed up the Web on mobile devices where network bandwidth is constrained. If I understand correctly, the proxy will first of all route all http traffic over a single #SPDY connection. It will compress images to WebP on the fly, minify code and move DNS resolution to the proxy. All of this holds the possibility of greatly increasing browsing performance – essentially giving you Opera-Mini-like performance but with a “full web” experience. I’m excited about that. I’m also glad that they’ve explicitly excluded https traffic from the proxy – although more and more services are redirecting users to https versions of their pages by default (including, amusingly, the page on developers.google.com that describes the data compression proxy).
I do have a few questions that don’t seem to have been addressed in the brief. First of all, what options do I, as a content developer, have to deactivate features such as image compression? One example of where I might want to do this is if I am trying to transfer a file (rather than display an image), or if I want to make sure that an image is sent at its highest resolution and clarity – for example, to enable a doctor to examine an X-ray image.
A few years ago, the Mobile Web Best Practices working group developed a set of Guidelines for Web Content Transformation – http://www.w3.org/TR/ct-guidelines/ (with some input from Google as well as from other players in the compressing proxy landscape, such as Novarra, which has since been acquired by Nokia and incorporated into their Asha product). This document could best be described as an attempt to re-educate implementers about some of the already existing features of http that they should be using to be more transparent to Web developers when they develop software that gets into the middle of the http transaction – especially when it attempts to meddle with the contents of what is being delivered to the client, or with what is being communicated back to the server. Unfortunately, this document never got beyond the Note stage, but I think it’s nonetheless very instructive.
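Incidentally, one of those already existing http features is the `Cache-Control: no-transform` directive, which tells intermediaries to leave the response body alone. As a rough sketch (the helper function and header-parsing logic here are my own illustration, not anything from Google’s brief), this is how an origin server could flag an X-ray image as off-limits, and how a well-behaved proxy could check for that flag before recompressing:

```python
def may_transform(response_headers):
    """Return True unless the origin sent Cache-Control: no-transform.

    Per HTTP/1.1 (and the W3C content transformation guidelines), a
    proxy must not modify the body of a response carrying this directive.
    """
    cache_control = response_headers.get("Cache-Control", "")
    directives = [d.strip().lower() for d in cache_control.split(",")]
    return "no-transform" not in directives


# An origin server opting its diagnostic imagery out of recompression:
xray_headers = {
    "Content-Type": "image/png",
    "Cache-Control": "no-transform",
}
assert not may_transform(xray_headers)

# An ordinary photo, fair game for on-the-fly WebP conversion:
photo_headers = {"Content-Type": "image/jpeg"}
assert may_transform(photo_headers)
```

Whether the Chrome proxy actually honors this directive is exactly the kind of thing I’d like the tech note to spell out.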
I wonder if the implementers of this new Google Data Compression Proxy for Chrome on iOS and Android have read it? If not, I suggest that they do. If there is one thing I learned back then, it was to tread lightly when attempting to “optimize” Web content in the network: http://techcrunch.com/2007/09/21/vodafone-in-mobile-web-storm/
Can We Stop the Tracking Already?
I cannot believe this nonsense is still going on. I had to check my watch – yes indeed, it's now 2013, and we still don't have a viable do-not-track specification. This should have been one of the simplest pieces of work that W3C has ever engaged in. Instead it has been drawn out into an ever-deepening vortex of conflicting interests, back-stabbing and bad-faith behavior from which there seemingly is no escape. Do-not-track preferences are now built into all major browsers, so many consumers might think this is a solved issue – it's not, because nobody can agree on what "do not track" means. I say "nobody" but what I mean is that advertisers don't agree. Pretty much everyone else agrees – it means "do not track." Advertisers seem to think it should mean "go ahead and track" but "don't show me targeted ads so I don't feel like I'm being tracked." Some advertisers who have joined W3C and the Tracking Protection working group have done so with the explicit, cynical goal of torpedoing do-not-track.

The question advertisers need to ask themselves is: what are they so afraid of? Surely, if Web users find advertising-supported sites and targeted, context-aware advertising so useful, then they will be happy to have their Web surfing tracked for the purposes of targeting this advertising and providing these services. If users feel they are getting a fair shake for the information they are providing to advertisers, they should not object to being tracked. Or is it possible that the advertising community can only exist by tricking users into providing information that they would not knowingly provide, and then reselling this information in ways to which the user would not agree?
I hope the working group chairs and W3C team members involved can provide some leadership and pull this group back from the brink of irrelevance. We need a do-not-track standard that stands up for user privacy on the Web.
#donottrack #privacy #w3c #blogthis
This great article neatly skewers Apple’s claim that iMessages are encrypted “so no one but the sender and receiver can see or read them.” This was a claim I was immediately sceptical about when it was made, so it’s nice to have some expert opinion backing up that scepticism.
Overall a very good program of talks this morning at +LeWeb London focusing on the "Sharing Economy." I think the idea of the sharing economy aligns well with the core values of the Web. But one thing no speaker has addressed yet is how to deal with "bad actors" in the context of the sharing economy. How can I participate in this sharing economy and avoid being phished, or spammed, or pwned? How can I participate in the sharing economy and also maintain my privacy? How can we stop airbnb becoming a micro-culture like eBay that is impenetrable and hostile to newcomers? How can we use the transparency of the Web to combat our own darker natures? #LeWeb #SharingEconomy #blogthis
One take-away from last week's Mobilism conference that I did not get to ruminate on during +Jeremy Keith's fine panel was just the bare fact that responsive design has arrived. Last year's Mobilism was full of pitches for responsive design and explanations of why responsive design was a good idea. This year's conference speakers mostly started from a base assumption: we are designing responsively. Now what? How do we do it? What best practices should we use? What anti-patterns exist? How does it apply to images, to animation, to touch, etc…? For those in the Web design community this may be old news, but I think it's notable that we've had that shift, from justification to implementation of responsive design, in the last year.
I think this is more evidence for what I've been saying for the past few months: the "Mobile Web" is no longer a thing. That might sound strange coming from someone who helped to develop the W3C Mobile Web Best Practices, but where we once said "mobile Web" we now need to be saying "responsive design" and we need to be thinking about a much wider range of devices and input / output modalities than simply mobile phones. (For example, gaming consoles, as +Anna Debenham pointed out in her Mobilism presentation.) Simultaneously we need to realize that the Web is a mobile medium – by some counts, a majority of Web usage is now happening from devices we are counting as "mobile."
I don't get how so many people can be so vehemently opposed to the QR code – or how they can somehow view it as a weapon of "marketing." In my mind, the QR code is about openness. You may not like the looks of it, but it is a "democratizing" technology. It's open – anyone can create one – and it can point to a URL, which is itself an open pointer to anywhere on the Web. In contrast, other similar mechanisms (e.g. NFC) are usually closed and proprietary in nature. It actually reminds me of the equally misguided negative reaction to the URL itself in the early days of the Web.
Paging +Terence Eden.
Glad to see that this document I had worked on during my elected term on the TAG (with +Jeni Tennison, +Ashok Malhotra and +Larry Masinter) has been published. This document is trying to clarify some issues around Web publishing and linking that seem to keep cropping up in legal and policy discussions. In the process, it offers up some (hopefully) easy-to-understand definitions of pieces of Web technology. Although my proposed language on enshrining a "right to link" doesn't seem to have made it into the final draft, I think it's still a good piece of work. #blogthis
This weekend I built a simple temperature sensor with #arduino and got it sending information to #cosm via the #gsm shield, using the #bluevia SIM for data. Even after a year working on this project, this was actually the first time I was able to test the whole thing end to end myself, as a user would do (including purchasing the shield and the Arduino kit itself through an online store, activating the SIM and adding balance to it, etc…). The results can be seen below and here: https://cosm.com/feeds/121725 where you can get an updated feed and graph of the temperature in my living room. It’s a pretty simple project, but especially since I have been swanning around London telling people how easy it would be to build a connected temperature sensor with this shield, it was gratifying to see that I was right. :) The project uses the Cosm libraries and is built on top of the Cosm example code, but uses the GSM libraries instead of the Ethernet shield ones. In putting it together I realized that one of the differences between writing an IoT application for the GSM shield (as opposed to Ethernet or WiFi) will be keeping data volume to a minimum. Also, the Cosm example code activates the network and keeps it active even when the Arduino is just sitting idle, whereas on GSM you’d want to connect and disconnect, especially if you are on battery power.
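On that data-volume point: one thing that helps is Cosm’s CSV update format, where the request body is nothing but the reading itself. Here’s a rough Python equivalent of what the Arduino sketch sends over the air (the helper function is mine, the API key is a placeholder, and the exact endpoint shape is from the Cosm v2 docs as I remember them – verify before relying on it):

```python
def build_cosm_update(feed_id, datastream, value):
    """Build the pieces of a Cosm v2 datastream update.

    The CSV format keeps the request body tiny -- just the value --
    which matters when every byte travels over a metered GSM link.
    """
    url = "http://api.cosm.com/v2/feeds/%s/datastreams/%s.csv" % (
        feed_id, datastream)
    headers = {"X-ApiKey": "YOUR_API_KEY"}  # placeholder, not a real key
    body = "%.1f" % value  # e.g. "21.5" -- one reading, nothing more
    return url, headers, body


url, headers, body = build_cosm_update(121725, "temperature", 21.5)
# body is just five bytes: "21.5" plus nothing else
```

A JSON body carrying the same reading would be several times larger, which is irrelevant over Ethernet but adds up over a pay-per-megabyte SIM.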
So I read this article with some interest this morning on the Tube. Reading it requires subscription or sign-up (it was one of my 8 free articles a month – I’m not sufficiently motivated to subscribe, I’m afraid, though I’m a big fan of the FT WebApp.) For those non-subscribers, allow me to summarize one of the key points: forced unbundling of telecoms services in the EU (e.g. making companies like BT rent out their infrastructure to competitors at a regulated rate) has brought about great benefits for consumers (e.g. lower prices and more choice for broadband) but at the cost of these companies’ cash flow compared to their US counterparts who are not subject to this kind of regulation. AND (here’s the key part) therefore European telecoms companies have had less money to invest in network upgrades.
Now – at the risk of channeling Cartwright from Time Bandits – why, if that’s the case, do I perceive that we have much faster broadband Internet speeds and more choice of providers available in Europe than are generally available in the US? In London, I have an 80 megabit downlink / 15 megabit uplink to my house! And although I am a BT customer, I could choose from a number of service providers who (through the magic of local loop unbundling) could provide similar services across the same wires. This is because BT have been investing in rolling out “fibre to the cabinet” (FTTC) and “fibre to the premises” (FTTP) technologies and then turning around and leasing that capacity through their wholesale division. My parents in New Haven, Connecticut (not exactly a technological backwater), meanwhile, are stuck with a 1.5 megabit downlink from one monopoly provider as the only DSL option available to them.
So what’s going on here? Is there really a tidal wave of broadband innovation happening in the US and I just don’t perceive it? From where I’m standing, the regulated unbundling seems to have worked well both for competition and for innovation in the broadband space. Am I missing something?
[see the comments thread on Google+]
Disruption isn't always good and innovation doesn't always make the world better. I am a big fan of dystopian visions of future technology – currently typified by the wonderful "Black Mirror" series on Channel 4 in the UK. (If you had an implant that recorded every moment of your life for you to relive at your leisure, would that really be a good thing? Cf. Google Glass.) I think it's our role as technologists, strategists, architects, product designers, and so on to help steer us away from these kinds of dystopian scenarios. Just because something can be done does not mean it necessarily should be done, and just because someone poses an objection to or points out a risk of a new technology, that does not make that person a luddite.
The amazing image (http://www.businessinsider.com/vatican-square-2005-and-2013-2013-3) shared across social media of the difference between the crowd in front of the Papal conclave in 2005 vs 2013 (everyone in the 2013 shot is holding up a phone) is stunning and looks like it was ripped from a Black Mirror episode. Is it better that people are increasingly experiencing the world around them second hand, through a lens and a screen?