Monday, January 30, 2012

Jonathan Franzen Is Wrong: Ebooks Are Good for Everyone

I love books, especially old ones. Recently I learned a simple dinner table trick from a 110-year-old magic book. It involves picking up a napkin with both hands and, without releasing either corner, tying the napkin into a knot. Good fun, and likely unavailable anywhere but in this very old tome. That fact, though, does not make me love ebooks any less or think that physical books are somehow a better long-term solution for the reading public. Celebrated author Jonathan Franzen thinks otherwise.

Author of The Corrections and Freedom, Franzen is revered for his prose, attention to detail and ear for the inner lives of sometimes desperate (and/or depressed) people. I think his writing is excellent, but he is way off base on ebooks.

In a speech given during Britain’s Hay Festival in Cartagena, Colombia, Franzen said he prefers paper technology. “I can spill water on it and it would still work!” said Franzen, according to a report in The Telegraph. Franzen was also fixated on the idea that ebooks can or will change over time. Print is permanent, and Franzen apparently prizes that permanence. For the record, Franzen’s books are available as ebooks.

I have no idea why Franzen assumes that publishers and authors are changing their books for the e-editions. With the exception of no longer knowing exact page numbers, I don’t see anyone changing their books for the Amazon Kindle, Kindle Fire, Barnes & Noble Nook, or the Apple iPad and iBooks. An ebook reader is just a new delivery mechanism for literature.

To make matters worse, Franzen throws “capitalists” into the mix. They hate print books, he said, because these physical books will continue to work 10 years from now. “It’s a bad business model,” noted Franzen. I think capitalists like any kind of book they can sell you in mass quantities. I don’t think they love ebooks more because they won’t last as long (or at least the platforms they’re on won’t). My guess is that capitalists appreciate the speed with which you can get an ebook to market and the enhanced opportunities for broad distribution. Think about it: Millions of Steve Jobs bio books (one of ebook’s top three sellers in 2011) were delivered to readers, and I bet a vast portion of them did not ride on trucks. They sped through the air from, say, Amazon’s servers to millions of Kindles around the world.

If Franzen is interested in permanence, shouldn’t he cheer the fact that people are now reading books without hacking down the world’s trees to make them or sending carbon-dioxide-producing, global-warming-promoting 18-wheelers around the country to deliver thousands and thousands of physical books (though, to be fair, this is still happening, too)?

No, Franzen is fixated on the idea of physical books. I partially agree with him: Printed books are a powerful, romantic idea. As noted, I love them and have a rather large collection of both large-format books and many from the late 1800s and early 1900s. It’s not unusual for me to pull one off the shelf and start thumbing through it, just to look at the old engravings or marvel at the notes scribbled near the binding by, perhaps, the first owner. That romantic ideal, however, doesn’t make me want to stop reading ebooks on my Kindle, iPad or iPhone.

Oddly, my own 13-year-old daughter is a little like Franzen. She has never read an ebook. I blame J.K. Rowling. Until recently, the author refused to offer the Harry Potter series as ebooks. So my daughter’s only choice has been to lug around each increasingly larger volume in the series. She’s on the final tome now and it is a monster. Yet, she insists she never wants to own a Kindle. She says she simply loves books too much and cannot imagine a time when they’re gone.

I hate to tell her this, but ebooks are the future. They’re cheaper to produce, easier to distribute and, dare I say it, probably promote reading better than your local library. And while Franzen is concerned about ebook versions differing from their real-world counterparts, I’m cheering the emergence of new kinds of ebooks that take the IRL reading experience to places we scarcely imagined on the printed page. One need only look to interactive children’s books and etextbooks for evidence.

What Franzen fails to realize is that while books are beautiful, permanent things they’re also inconvenient. Years ago you traveled with, maybe, one book and some magazines. You wouldn’t consider taking two big books (maybe two thinner paperbacks). But even if you weren’t traveling, when you finished one book, you needed to head to the library or bookstore to buy another. When I finish an ebook, I simply connect to Kindle’s Whispernet and buy and download a new one. Like most people I know, I read more now with my Kindle than I ever did before.

My reaction to Franzen’s comment was immediate and negative. Not surprisingly, when I posted news of Franzen’s comments on Google+, the tech-savvy audience echoed many of my own sentiments. They derided Franzen for appointing himself “guardian of society” and noted how Franzen’s concerns are not unlike those of people who feared what the rise of the Gutenberg press would do to publishing.

I will not lie and say that I won’t miss print when it’s gone, but, as Franzen himself predicts, it will be a memory in 50 years. Franzen’s glad he won’t be here to see it. I, on the other hand, hope to live well past my 97th year and to thoroughly enjoy ebooks from now to then and beyond. Maybe Franzen will change his mind and join me.

Do you decry the rise of ebooks and inevitable fall of print? Share your worries (or lack thereof) in the comments.

Bonus: Up Close with the Barnes & Noble Nook Tablet

Barnes & Noble's Nook Demo

Here's one more argument in favour of ebooks that rants against paper books. I am divided; I can see both sides of the argument. As much as I enjoy ebooks, I don't think paper-based books are going away anytime soon, and bookmaking and book sharing are still important. Having said that, I think the author is spot on where he writes that an ebook reader is just a new delivery mechanism for literature.

How to Watch Obama's Google+ Hangout

Google Hangouts are great. Here's one more reason to check into your Google+ account today and try a Hangout session. Here are the instructions for those of you who might use it.

http://mashable.com/2012/01/30/obama-google-hangout/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Mashable+%28Mashable%29

Friday, January 27, 2012

Will ValoBox (Web-powered Books) be the next iTunes for books and text content?

From their website, "
The content community is you, me and everyone else who produces, reads and shares great content.
ValoBox rewards those who share content with a massive 25% of any sales made to spend on more books.
For every page bought, a 60% royalty is sent directly to the content owner.
We believe that reading should be simple, fun and support good causes. That is why ValoBox sends 15% of its profits to our selected charities from the word go.
So if you like what you read, click share and watch your tweets and blog embeds do some good. You’ll be helping everyone who makes great content as well as being able to buy some great new books!"

http://www.ValoBox.com/

I think the idea is great, and it reminds me of how iTunes changed the way people listen to music. The premise was that most people want only one or two items from a CD or a cassette and would like the ability to mix and match to their preference. iTunes was geared towards that, and it was indeed quite a revolutionary idea in how MP3s and other formats were packaged to suit personal choices.

Is it the same story with books and journals? I think to some extent it might indeed be. For instance, I know that with journals, many of us are interested in one article or a specific section and then would like to grow our own collection (for instance, citeulike (http://www.citeulike.com) and Mendeley (http://www.mendeley.com) are great tools and web apps that let you do exactly that). It'd be great to see how ValoBox emerges and lets us play with this concept. Money adds a new twist here, I think.
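For what it's worth, the split described in ValoBox's blurb can be sketched as simple arithmetic. The percentages (60% royalty per page to the content owner, 25% sharing reward) come from the quoted text above; the function and its field names are my own illustration, not ValoBox's actual accounting code.

```python
def split_page_sale(price, via_share=False):
    """Roughly allocate the proceeds of a single page purchase,
    per the percentages in ValoBox's own description."""
    royalty = 0.60 * price       # sent directly to the content owner
    platform = price - royalty   # what the platform retains before costs
    # 25% of the sale is credited back to the sharer as spending credit
    share_credit = 0.25 * price if via_share else 0.0
    return {"royalty": royalty, "platform": platform,
            "share_credit": share_credit}
```

The stated 15% of profits to charity is left out because "profits" depends on costs the blurb does not specify.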

Research is the fourth "R," after reading, 'riting, and 'rithmetic, and we need to learn it well. Here's an argument

Dan Russell writes, 
SearchReSearch, 28/01/12, Daniel M. Russell

We all know about the three R’s of education—reading, writing and ‘rithmetic. The three basic skills that schools have to teach… and which obviously doesn’t include spelling.

I want to propose that there’s a 4th R we should be considering: RESEARCH.

If you think about it, learning has changed from a school-only activity to a life-long activity. And just as advantage accrues to the person who can learn the best and know the most, so also does the ability to research to the best of your ability.

As Samuel Johnson said: "Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it."

While that’s true, this common version of his quote usually leaves off the rest of that paragraph: “...When we enquire into any subject, the first thing we have to do is to know what books have treated of it. This leads us to look at catalogues, and at the backs of books in libraries.” (Boswell's Life of Johnson, 1791)

In other words, even if you know how to research something, you still need to know a little bit about the skill of how to search. In Johnson’s day that meant knowing that catalogues existed, that libraries were collections of books on topics of interest, and that the back of a book contains an index. It also meant that you knew how to get into a library, many of which were still private and by subscription (read, “invitation”) only.

People fluent in search and retrieval not only save time, but are far more likely to find higher quality, more credible, more useful content. More importantly, they can ask questions that were impossible just a few years ago. People with these skills are effectively smarter.

Using Google to do search is easy. It's been designed that way. You type something like [New York Times] into a search box and a moment later you're reading the paper. If you search for [pizza Mountain View], you get a list of local pizza places with phone numbers and user reviews.

Most of the searches that Google sees in a typical day fall into this simple category, where the user's goal is clear and the results are pretty obvious and unambiguous.

But a significant number of searches are not. Searchers might have a goal in mind but they can’t figure out how to express it in a way that will give them what they want. Sometimes their search is precise, but they don’t know how to read and interpret the results. Sometimes I’ll see searchers spending 30 minutes searching for something that should take less than 2 minutes. It drives me crazy as a researcher because I know that the searcher is missing just one small, but critical, piece of information. We try to build as much as we can into the search algorithm, but people still need to know a bit about how the web is organized (there’s no index in the back of the book) and how search engines crawl, index and respond to their queries.

In a sense, that’s my mission—to help people become better researchers, beyond just the basic skill of knowing how to make Google dance. My goal is to help people understand the larger issues at play here—how to be a literate person now, and how to continually learn to be literate as changes happen in the future. This is the idea of meta-literacy—knowing how to be literate about your own literacy. More about this in future posts.

BOTTOM LINE: Research is a skill that we all take for granted, yet it’s a critical skill for our future. As the nature of work and education changes (and that, really, is the only constant we have), we… as a teaching culture… need to bring our students up to speed on what it takes to be good searchers.

We need to give them the skills of the 4th R—research—and all of the skills and knowledge they need to function effectively as learned searchers.

What’s more, we’re trying to equip them with skills they can use not just now, but for every information search problem they confront now and in the future.

Search on! 
----

I agree, all the more so because we increasingly live in an age where it's important to identify where a piece of information is located and then instruct a machine to go fetch it for us. This is as true for locating a restaurant as it is for identifying that essential journal article which will advance our knowledge. Good search ability gives us time to think of greater and more important things, which then become search-worthy themselves. The iterative process continues.

Thursday, January 26, 2012

How technology hurts us — and how it can make us happier

Very interesting take on our lives lived in technology and all the distraction of a life on the Internet, the likes and LOLs, and paying attention to stuff. He talks about The Information Diet in the post, which I agree is a wonderful resource for learning how to free yourself from digital distraction. A truly reflective post, worth reading over again. Sort of a digital Walden Pond manifesto, if one could be.


http://www.theverge.com/web/2012/1/26/2736373/brian-lam-technology-distractio...

What is Nodding Syndrome and how do you investigate if you come across a case series?

In this installment of the CDC MMWR, epidemiologists from the CDC describe a great investigation of a cluster of a disease known as Nodding Syndrome, which they received reports of from South Sudan. The story of how they tracked it (using case series and case-control studies to get to the heart of it) is very instructive. Nodding Syndrome is repetitive nodding of the head along with signs of seizure, prevalent in parts of Africa. It appears that infection with Onchocerca volvulus may play a role in it, but the story of their investigation into the cause of the disease makes for interesting reading.

Wednesday, January 25, 2012

Why AI will eventually drive healthcare, but not anytime soon, or the clinician as a "Go" player

Today I was reading Fred Trotter's argument about algorithms and his criticism of a recent article by Vinod Khosla, in which Khosla argued that in the future doctors would be replaced by AI algorithms that run through diagnostic possibilities, using cell phones to arrive at algorithm-driven diagnoses. Here is a very well-argued piece by Trotter in which he compares clinicians with Go players and discusses how difficult and premature it is to consider that algorithm-driven solutions alone can lead the way to the future.

I'm interested in your views.

O'Reilly Radar, 26/01/12, Fred Trotter

TechCrunch recently published a guest post from Vinod Khosla with the headline "Do We Need Doctors or Algorithms?". Khosla is an investor and engineer, but he is a little outside his depth on some of his conclusions about health IT.

Let me concede and endorse his main point that doctors will become bionic clinicians by teaming with smart algorithms. He is also right that eventually the best doctors will be artificial intelligence (AI) systems — software minds rather than human minds.

That said, I disagree with Khosla on almost all of the details. Khosla has accidentally embraced a perspective that too many engineers and software guys bring to health IT.

Bear with me — I am the guy trying to write the "House M.D." AI algorithms that Khosla wants. It's harder than he thinks because of two main problems that he's not considering: The search space problem and the good data problem.

The search space problem

Any person even reasonably informed about AI knows about Go, an ancient game with simple rules. Those simple rules hide the fact that Go is a very complex game indeed. For a computer, it is much harder to play than chess.

Almost since the dawn of computing, chess was regarded as something that required intelligence and was therefore a good test of AI. In 1997, the world chess champion was beaten by a computer. In the year after, a professional Go player beat the best Go software in the world with a 25 stone handicap. Artificial intelligence experts study Go carefully precisely because it is so hard for computers. The approach that computers take toward being smart — thinking of lots of options really fast — stops working when the number of options skyrockets, and the number of potentially right answers also becomes enormous. Most significantly, Go can always be made more computationally difficult by simply expanding the board.
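Trotter's point about expanding the board can be made concrete with a back-of-the-envelope calculation. The branching factors and game lengths below are rough, commonly cited figures (not exact values), and the naive b^d tree size ignores pruning entirely; it is only meant to show the scale difference he is describing.

```python
def game_tree_size(branching, depth):
    """Naive game-tree size: average branching factor raised to
    typical game length (in plies). Ignores pruning and symmetry."""
    return branching ** depth

# Rough figures: chess ~35 moves per position over ~80 plies;
# 19x19 Go ~250 moves per position over ~150 plies.
chess = game_tree_size(35, 80)     # on the order of 10^123
go_19 = game_tree_size(250, 150)   # on the order of 10^359

# Expanding the board raises the branching factor and game length,
# so the tree explodes further -- you can always make Go harder.
go_21 = game_tree_size(21 * 21, 180)
```

The exact numbers do not matter; what matters is that "think of lots of options really fast" stops being a viable strategy at these scales.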

Make no mistake, the diagnosis and treatment of human illness is like Go. It's not like chess. Khosla is making a classic AI mistake, presuming that because he can discern the rules easily, it means the game is simple. Chess has far more complex rules than Go, but it ends up being a simpler game for computers to play.

To be great at Go, software must learn to ignore possibilities, rather than searching through them. In short, it must develop "Go instincts." The same is true for any software that could claim to be a diagnostician.

How can you tell when software diagnosticians are having search problems? When they cannot tell the difference between all of the "right" answers to a particular problem. The average doctor does not need to be told "could it be Zebra Fever?" by a computer that cannot tell that it should have ignored any zebra-related possibilities because it is not physically located in Africa. (No zebras were harmed in the writing of this article, and I do not believe there is a real disease called Zebra Fever.)
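As a toy sketch of what "ignoring possibilities" might look like in code: a diagnostic candidate list filtered by context before any scoring happens. The disease entries, the region field, and the function are entirely my own invention for illustration ("Zebra Fever" is the article's own joke, not a real disease).

```python
# Hypothetical candidate list; each entry may be restricted to regions.
CANDIDATES = [
    {"name": "Influenza", "regions": None},          # None = anywhere
    {"name": "Zebra Fever", "regions": {"Africa"}},  # region-restricted
]

def prune(candidates, patient_region):
    """Drop possibilities the context already rules out, instead of
    scoring every one of them -- the 'Go instinct' analogue."""
    return [c for c in candidates
            if c["regions"] is None or patient_region in c["regions"]]
```

A real system would prune on far more than geography, but the principle is the same: discard branches before searching them.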

The good data problem

The second problem is the good data problem, which is what I spend most of my time working on.

Almost every time I get over-excited about the Direct Project or other health data exchange progress, my co-author David Uhlman brings me back to earth:

What good is it to have your lab results transferred from hospital A to hospital B using secure SMTP and XML? They are going to re-do the labs anyway because they don't trust the other lab.

While I still have hope for health information exchange in the long term, David is right in the short term. Healthcare data is not remotely solid or trustworthy. A good majority of the time, it is total crap. The reason that doctors insist on having labs done locally is not because they don't trust the competitor's lab; it's more of a "devil that you know" effect. They do not trust their own labs either, but they have a better understanding of how and when their own labs screw up. That is not a good environment for medical AI to blossom.

The simple reality is that doctors have good reason to be dubious about the contents of an EHR record. For lots of reasons, not the least of which is that the codes they are potentially entering there are not diagnostically helpful or valid.

Non-healthcare geeks presume that the dictionaries and ontologies used to encode healthcare data are automatically valid. But in fact, the best assumption is that ontologies consistently lead to dangerous diagnostic practices, as they shepherd clinicians into choosing a label for a condition rather than a true diagnosis. Once a patient's chart has a given label, either for diagnosis or for treatment, it can be very difficult to reassess that patient effectively. There is even a name for this problem: clinical inertia. Clinical inertia is an issue with or without computer software involved, but it is very easy for an ontology of diseases and treatments to make clinical inertia worse. The fact is, medical ontologies must be constantly policed to ensure that they do not make things worse, rather than better.

It simply does not matter how good the AI algorithm is if your healthcare data is both incorrect and described with a faulty healthcare ontology. My personal experiences with health data on a wide scale? It's like having a conversation with a habitual liar who has a speech impediment.

So Khosla is not "wrong" per se; he's just focused on solving the wrong parts of the problem. As a result, his estimations of when certain things will happen are pretty far off.

I believe that we will not have really good diagnostic software until after the singularity and until after we can ensure that healthcare data is reliable. I actually spend most of my time on the second problem, which is really a sociological problem rather than a technology problem.

Imagine if we had a "House AI" before we were able to feed it reliable data. Ironically, it would be very much like the character on TV: constantly annoyed that everyone around him keeps screwing up and getting in his way.

Anyone who has seen the show knows that the House character is constantly trying to convince the other characters that the patients are lying. The reality is that the best diagnosticians typically assume that the chart is lying before they assume that the patient is lying. With notable exceptions, the typical patient is highly motivated to get a good diagnosis and is, therefore, honest. The chart, on the other hand, be it paper or digital, has no motivation whatsoever, and it will happily mix in false lab reports and record inane diagnoses from previous visits.

The average doctor doubts the patient chart but trusts the patient story. For the foreseeable future, that is going to work much better than an algorithmically focused approach.

Eventually, Khosla's version of the future (which is typical of forward-thinking geeks in health IT) will certainly happen, but I think it is still 30 years away. The technology will be ready far earlier. Our screwed up incentive systems and backward corporate politics will be holding us back. I hardly have to make this argument, however, since Hugo Campos recently made it so well.

Eventually, people will get better care from AI. For now, we should keep the algorithms focused on the data that we know is good and keep the doctors focused on the patients. We should be worried about making patient data accurate and reliable.

I promise you we will have the AI problem finished long before we have healthcare data that is reliable enough to train it.

Until that happens, imagine how Watson would have performed on "Jeopardy" if it had been trained on "Lord of the Rings" and "The Cat in the Hat" instead of encyclopedias. Until we have healthcare data that is more reliable than "The Cat in the Hat," I will keep my doctor, and you can keep your algorithms, thank you very much.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a rosetta stone for the IT implementer who wants to help organizations harness EHR systems.

Related:

Monday, January 23, 2012

Obama to Take Live Questions on Google+ Hangout Next Week

Now, that's something!

AllThingsD, 24/01/12, Liz Gannes

U.S. President Barack Obama will participate in an interview with YouTube users on Jan. 30, as he has done before. What’s different is that some of those questions will be asked live, via a Google+ Hangout. Would-be interviewers (who might be live, but will surely be tightly scripted) can submit questions via YouTube.

Sunday, January 22, 2012

Rough Type: Nicholas Carr's Blog: The Summers' Tale

In his blog (http://www.roughtype.com/archives/2012/01/the_summers_tal.php), Nicholas Carr writes,

"But this idea that knowledge can be separated from facts - that we can know without knowing - really needs to be challenged before it gains any further currency. It's wonderful beyond words that we humans can look things up, whether in books or from the web, but that doesn't mean that the contents of our memory doesn't matter. Understanding comes from context, and context comes from knowing stuff. Facts become most meaningful when, thanks to the miracle of memory, we weave them together in our minds into something much greater: personal knowledge and, if we're lucky, wisdom."

Greatly liked this observation and wanted to share it. Ready access to facts and figures brings us to the verge of generating new knowledge and wisdom... but only if we have the vision to grow.

Saturday, January 21, 2012

I think genomics-based EHR is a realistic expectation for future EMR designs but also needs to include enviromics at some point

Bruce Friedman writes in Lab Soft News,"

Although I generally agree with what John Lynn posts on his blog over at EMR and EHR, one of his recent posts caused me to wince a little bit (see: Genomics Based EHR). He raises the issue of the "smart EMR" with genomic data as one of its "core elements". Here's his note:

Genomics is one of the core elements that I think a “Smart EMR” will be required to have in the future. I really feel that the future of patient care will require some sort of interaction with genomic data and that will only be able to be done with a computer and likely an EHR....As I think about genomics interacting with EHR data and the benefits that could provide healthcare going forward, I realize that at some point doctors won’t have any choice but to adopt an EHR software. It will eventually be like a doctor saying they don’t want to use a blood pressure cuff since they don’t like technology.

First, some history. I was part of a panel discussion with one of the pioneers of the HIS (hospital information system), a precursor to the EMR/EHR, more than two decades ago. He predicted that the LIS would soon disappear, along with other "ancillary systems," and be replaced by a single, monolithic hospital-based information system. LISs have certainly not gone away during these years and have now been joined by RISs, PACSs, CVISs, and other specialized clinical information systems. In fact, they have persisted and evolved because their functionality was required by physicians. There is no question that the EMR/EHR will get smarter as rules-based logic becomes a more important part of its repertoire. However, I believe that its role vis-a-vis genomic data will be mainly as a reporting engine.

Along these same lines, I also have significant doubts that the LIS will be the primary storage and analytic engine for genomic data. If not the LIS, then what will replace it within the hospital IT environment? In a previous post, I suggested that the -omics cloud will be required to provide the analysis, storage, and reporting of genomic data (see: The -Omics Cloud: A Healthcare IT Solution Already Developed for Genomics Research). I copy from that note the conclusions I reached, which I continue to think are valid:

  • We are at an evolutionary dead-end with our current EMR technology. Hospitals will happily pay tens, if not hundreds, of millions of dollars to use old technology designed to mimic the traditional paper medical record.
  • All of these EMRs are transaction-based with little "intelligence" and even less ability to integrate and analyze the deluge of patient data generated by the EMRs and their "feeder" systems such as the LIS.
  • LISs are highly functional -- pathology and the clinical labs would be unable to function for a day without them. Nevertheless, LISs vendors are not keeping up with the development of modules that can support the exploding -omics science.
  • It is highly unlikely that many of the incumbent EMR and LIS vendors will be able to jump to a new generation of analytic healthcare information systems. They are too tied to their current technology, current business models, and their need to recover their sunk development costs.
  • In my opinion, most of the progress in cutting-edge healthcare computing will be generated in academic research labs, [and] non-profit biomedical research institutes....

So where does all of this leave us? My view for the future of healthcare IT is that it will evolve into a network of highly specialized servers, some in the cloud, with hospital physicians interacting primarily with the EMR for reporting purposes. The field of genomics is moving so rapidly that an EMR vendor would be unable to keep abreast of the underlying science necessary to support workers in the field. So what will be the role and function of the "smart EMR"? It will serve mainly to manage the deluge of data fed to it by the network of hospital computers, including the diagnostic systems, and present them in an orderly fashion to the physicians responsible for optimal patient care."

I agree. I also think that in designing a robust, life-course-based electronic medical record that takes into account the various aspects of a person's life, we should not focus on genomics alone and leave out the other "omics": enviromics (the study of the combined impact of ecology and environment, epigenetic or otherwise, on gene expression) should also be incorporated in future designs.

Why health care news readers need an “information diet”

Health News Review writes,

"
Health News Review, 21/01/12, Gary Schwitzer

As we close out the week and prepare to head to a beach for a desperately-needed mid-winter break, here are some catch-up items we meant to write about earlier.

  • NPR interview with author of The Information Diet making the case for “conscious consumption of news and information.”  We certainly make that case for health news and information – which often floods a thirsty public with a firehose of information when all they want and need is a sip of balanced, unbiased, complete information.  Excerpt:

“The question is, can we make enough people go: ‘Hey, you know what? I’m done. I’m done with the sensationalism of media. I’m done being taken advantage of by media companies so that I can have ads sold to me.’ … If we want to make media better, then we’ve got to start consuming better media.”

  • ... and this,
  • This is really getting old since we’re a month deep into 2012, but among the Nieman Journalism Lab’s predictions for journalism for 2012  was this one:
    • “News will increasingly be a conversation rather than a series of stories. In 2012, the divide will grow between journalists who are intently aware of and responsive to the needs of their communities and those who continue to make decisions based on long-ago-learned fortress mentalities. I wish I could say I were optimistic about crumbling fortresses. Instead, I’ll say that I’ll be on the lookout for examples of news presented as an ongoing, topical conversation rather than a series of journalist-driven stories. In an election year, being responsive to users’ actual information needs and being a part of a community’s conversation is more crucial than ever.”


I recently read The Information Diet. It's an excellent book on information management in an age when we are literally being "bamboozled" with more information than we can possibly cope with, and it presents a series of very sensible strategies for working with it. One of the most important points the book raises is to trace any news or analysis you see back to its original source and to objectively analyze the claims or thesis of any work you come across, online or not. I think it's a great message for a book such as this, and pertinent when it comes to health news consumption. It's important to be skeptical and question the claims of any health-related article, while keeping an open mind.

Friday, January 20, 2012

Our Sea Turtle


We are prize winners in the International Sandsculpting Competition!

We just won second prize in the inaugural Christchurch International Sandsculpting Competition in the "sea creatures" category. Below is our entry; it took us four hours to construct this sea turtle. It was a great competition, with leading professional sand sculptors from the US participating as well. Here are the photos. We are the MaurineMakers!


The Finished Product: our sea turtle, front face

We started by soft stacking the sand; it took several buckets of sea water to hold and pack the sand in, and some serious sculpting with trowels and shovels to get to the shape. Part of the face was sculpted with a kitchen knife, with guidance from professionals like Leonard from IB Posse.











Wednesday, January 18, 2012

Top Environmental Stories of 2011: Christchurch Is Glaringly Missing from the List

Here's a summary of the top environmental stories of 2011. Coming out this late in January of the new year, the list surprised me a little: the Christchurch earthquake did not feature among the top environmental stories, even though this was a very disruptive year in the life of the city. More importantly, the earthquake had major environmental impacts, disrupting air quality and harming the health of the citizens, particularly their mental health. Between September of 2010 and now, the city has weathered thousands of aftershocks, which speaks to the resilience of the citizens of this small city, but I thought it deserved at least an honorable mention.

Enviroblog 22/12/11 6:06 AM Environmental Journalism

By Nils Bruzelius, EWG Executive Editor


People are messy. So is nature. And what people do when nature unleashes its fury often makes things worse.

The staff at Environmental Working Group took a look at the major environmental news stories of the year and came up with two lists: the Top 10 Good News stories and the Top 10 Bad News stories.

Since environmentalism is mostly about limiting or preventing the harms done to people's and the planet's health by careless human activity, it's hardly surprising that nearly all of the "good news stories" involved doing something about problems that we humans created. The only exceptions involved contaminants that can come from both natural and man-made sources.

The message, once again, is that we are our own worst enemy. Good news comes when we do something to clean up our messes. Bad news comes when we create brand new environmental harms or risks, or just plain fail to address the ones already out there - even when we recognize the threat.

By a wide margin, EWG staffers said that the two top bad news stories of the year were President Obama's decision to kill the Environmental Protection Agency's latest effort to reduce the health threat from smog and the nuclear disaster that erupted in Japan when an unprecedented tsunami overwhelmed the defenses that were supposed to protect a complex of five reactors built at the very edge of the sea.

Trying not to get too depressed in the middle of holiday season, we'll go to the good news first.

Again by a wide margin, EWG staffers said the two top good news stories were the growing momentum to limit or ban BPA and (in the messy category), the emergence of evidence that the drilling technique known as hydraulic fracturing really can be a threat to drinking water supplies. Drilling companies have insisted for years that fracking poses no threat to water supplies, but we've been skeptical, and so have many property owners in the states where drilling is intensifying. We're glad to see some hard facts on the table.

First, the Good

Here's the full rundown of the top good news stories as chosen by EWG's researchers and other staff:

1. BPA Feels the Heat
Two months after trend-setting California banned the endocrine-disrupting chemical BPA in baby bottles and sippy cups (as of 2013), the federal Food and Drug Administration agreed under the pressure of a lawsuit to decide whether to eliminate BPA in all food packaging. Meanwhile, the American Chemistry Council, a trade group that has fought fiercely against the California bill and other legislative curbs on BPA, appeared to throw in the towel, at least part way, as it petitioned the FDA to "clarify for consumers" that the industry no longer uses the chemical in children's food containers.

2. Truth Will Out: Fracking Has Tainted Ground Water

Giving the lie to gas drillers' long-standing insistence that hydraulic fracturing to release shale oil and gas has never contaminated drinking water supplies, the Environmental Protection Agency announced that it had detected chemicals associated with fracking in groundwater in Wyoming. Earlier, EWG's own investigation uncovered a long-forgotten 1987 EPA report that found fracking-related contamination in water wells used by West Virginia residents. In the face of mounting public pressure, meanwhile, regulators decided to postpone action on rules that could open the door to widespread drilling and fracking in the vast Delaware River watershed.

3. New Reason for Caution on Cell Phone Radiation

In another case where bad news is seen as good news - because it indicates that important new information is coming to light - the International Agency for Research on Cancer, a branch of the World Health Organization, for the first time listed radiation from cell phones as "possibly carcinogenic" to humans. The jury is still out on the possible health risks from these ubiquitous devices, but the decision was significant for those who live by the precautionary principle.

4. The Grand Canyon Gets Protection
Interior Secretary Ken Salazar took an important step toward protecting the chief water source for California and the Southwest when he extended for 20 years a ban on new uranium mining on 1 million acres around the Grand Canyon. EWG called attention to this looming danger in its report, Conflict at the Canyon.

5. Getting Rocket Fuel out of Water
Reversing a decision made during the administration of former President George W. Bush, the EPA said it will begin the process of setting legal limits on perchlorate, an ingredient in rocket fuel, and 16 other chemicals known as volatile organics that have contaminated water sources used by millions of Americans.

6. Blowing the Whistle on Sugar in Kids' Cereals
Bringing renewed attention to a problem that food makers have persistently refused to correct, a widely publicized EWG report pointed out that a number of heavily-marketed children's cereals contain unhealthy amounts of sugar, some of them more than popular dessert items.

7. California Moves to Curb Chromium-6
California's state Environmental Protection Agency adopted a first-in-nation health-based standard (public health goal) for hexavalent chromium in drinking water, the initial step in establishing a legal limit in drinking water for this widely found carcinogen that gained public notoriety in the movie Erin Brockovich.

8. HHS Calls for Less Fluoride in Drinking Water

Citing potential health risks to children, the U.S. Department of Health and Human Services proposed in January that utilities reduce the amount of fluoride they add to drinking water, which EWG and other public health advocates had long recommended. Three days later, the EPA granted a petition by EWG and two other environmental groups to end the use of sulfuryl fluoride, an insecticide and food fumigant that is also a source of fluoride exposure.

9. Sunscreen Rules - Too Little, Too Late
After deliberating for 33 years, the FDA finally got around to proposing rules governing the content and labeling of sunscreen products, but in EWG's view, they fall far short of the mark.

10. Brazilian Blowout Declared Unsafe
The FDA warned the makers of "Brazilian Blowout" in September that the company's hair straightening product, which contains carcinogenic formaldehyde, is "adulterated" and "misbranded." Earlier in the year, EWG's investigation found that a total of 16 companies used high levels of the chemical as an ingredient in similar products.

Now for the top Bad News. Take a deep breath.


1. President Obama Kills Tighter Smog Limits

As summer was winding down, Obama shocked EPA Administrator Lisa Jackson and the environmental community by blocking plans to impose stricter national standards on ozone-containing smog. It was the strongest indication yet that the administration was approaching major environmental decisions with a cold eye on the 2012 election.

2. Fukushima Melts Down
By itself, the Japanese tsunami was a horrendous, almost unimaginable event, one that reminds us that even the most highly developed nations can be left all but helpless when the full forces of nature get unleashed. But what happened at Fukushima had a more profound lesson: that technological hubris, self-serving bureaucracy, lack of transparency and a host of other human failings always have the capacity to take a bad situation - and make it worse. Unfortunately, Japan will be reminded of this lesson every day for decades to come.

3. A Deadly Year for Foodborne Illness
Cantaloupes and sprouts. Record-setting outbreaks of foodborne disease in the United States and Europe underscored once again that assuring food safety is a critical priority. The U.S. listeriosis outbreak, which came just months after Congress passed major new food safety legislation, was linked to cantaloupes grown in Colorado. It killed 29 and sickened at least 139. In Europe, an outbreak ultimately linked to sprouts unleashed an unusually deadly strain of E. coli, killing at least 18 and sickening about 2,000.

4. House Republicans Target EPA and Environmental Regulation

Propelled by the anti-regulatory fervor of the Tea Party and Republicans' desire to blame unemployment on Obama and "job killing" regulation, GOP members of Congress took aim at the EPA and environmental regulations of all types, even voting to block a non-existent rule on rural dust. The cost in lives, illness and economic loss from environmental degradation didn't enter into the discussion.

5. Still No Reform for Outdated Toxics Law

Thirty-five years and counting. That's how long it's been since Congress passed the Toxic Substances Control Act, the only one of the 70s era environmental reforms that has never been updated. In public, there seems to be consensus that it's high time to update a law that allows new chemicals on the market with no meaningful safety testing. But when it comes to actually working out a reform bill in the halls of Congress, that consensus evaporates.

6. Emissions Up, Action Down on Climate Change
Recently released data shows that in 2010, carbon dioxide emissions from fossil fuels jumped by the largest amount of any year since the industrial revolution. But in the United States, parts of Europe and much of the rest of the world, the prospects for concerted international action to curb climate change seemed to be fading away. Que sera, sera?

7. Fracking Wastewater Reaches Rivers, Water Treatment Plants
Wastewater from the natural gas drilling boom, laden with chemical contaminants and sometimes radioactivity, passed through sewage treatment plants that weren't designed for it and ended up in rivers that supply drinking water to cities in Pennsylvania and elsewhere. Meanwhile, the battle over whether and how to allow fracking in New York State neared a climax.

8. Federal Judge Blocks S.F. Cell Phone Right-to-Know Ordinance
The EWG-led campaign to require cell phone retailers to post information about cell radiation emissions suffered a setback when a federal judge struck down most of an ordinance passed by the San Francisco City Council, but the battle isn't over. A revised ordinance passed in 2011, but it, too, is being challenged.

9. Ballyhooed Solar Panel Company Goes Belly Up
President Obama's effort to promote a "green economy" alternative to fossil fuels and to help revive the economy took a hit when Solyndra, a California manufacturer of solar panels, declared bankruptcy. Critics used the scandal to attack subsidies for alternative energy programs, but the fact is, petroleum and other fossil fuels have fattened up on federal subsidies for decades.

10. Contaminated Chinese Drywall
The online news organization ProPublica brought national attention to the growing scandal over contaminated Chinese drywall that emits foul odors, causes appliances to fail and makes people sick. Thousands of homeowners and renters were affected, and the scandal is still unfolding.

"Only Handle It Once (OHIO)" technique is quite an interesting but intense approach to daily productivity

Saw this in a blog post this morning:

Life as a Healthcare CIO 19/01/12 12:00 AM John Halamka
In my recent post Work Induced Attention Deficit Disorder, several commenters asked how I stay focused and productive, speculating that I leverage my limited need for sleep.

Although having a 20 hour day helps, the real secret is that I end each day with an empty inbox.    I have no paper in my office.    I do not keep files other than those that are required for compliance purposes.

The end result is that for every document I'm asked to read, every report I'm asked to write, and every situation I'm asked to manage, I only handle the materials once.

What does this mean?

In a typical week, I'm asked to review 4 or 5 articles for journals.   Rather than leaving them to be read at some later time or reading them then deferring the review, I read and review them the day they are assigned.    This enables me to read them once and write the review very efficiently since all the facts are fresh in my mind.

I'm asked to review budgets for various grants, state, and local projects multiple times per week.   I read the budget, ask questions while the numbers are at my fingertips, and await responses.

In my 1000+ emails each day there are 10-20 that require detailed responses.   I leave these to the end of the day when I know I'll have uninterrupted time.   I write the responses and send them while all the details of the issues are clear to me.

Paperwork does occasionally find its way to my desk.  Since all payroll and all purchasing functions are electronic at BIDMC, the paperwork I have to do is mostly for external regulatory agencies.    I read the paperwork, answer everything, and give it to my assistant to package and mail.

Each day I'm asked to find time for calls, meetings, lectures, travel, and special events.   I look at my calendar in real time and respond with availability - making a decision on the spot if I can or cannot participate.

The end result of this approach is that I truly only handle each issue, document, or phone call once.   It's processed and it's done without delay or a growing inbox.   I work hard not to be the rate limiting step to any process.

Yes,  it can be difficult to juggle the Only Handle it Once (OHIO) approach during a day packed with meetings.    Given that unplanned work and the management of email has become 50% of our jobs, I try to structure my day with no more than 5 hours of planned meetings, leaving the rest of the time to bring closure to the issues discussed in the meetings and complete the other work that arrives.  It's the administrative equivalent of Open Access clinical scheduling.

It's tempting, especially after a long and emotionally tiring day, to break the OHIO principle.   However, doing so only removes time from the next day and makes it even more challenging to process the incoming flow of events.

One last caveat.   OHIO does not mean compromising quality or thoughtfulness.  Simply passing along issues to others without careful consideration does not increase efficiency.   I focus on doing it once to the best of my ability.  For larger projects, I use my "handle it once" approach to set aside a defined time on the weekend when I can do them in one sitting.

OHIO - give it a try and see if the free time it creates enables you to regain depth and counter the evils of work induced attention deficit disorder.

---

This looks like a sure recipe for top productivity: take on an issue as soon as it arises, do not let it sit, and get it over with right away.

Monday, January 16, 2012

What do you do when editors ask you to add citations to inflate their impact factor?

Ben Goldacre writes,

bengoldacre - secondary blog 16/01/12 11:20 PM

This is an interesting new record of bad behaviour, driven by bibliometrics: academic journals, asking academic authors to cite papers from their own pages, in order to make that journal's impact factor look better. Worse than that, it happened at the fragile moment, where a paper's publication hangs in the balance. Grim!

http://onlinelibrary.wiley.com/doi/10.1111/j.1538-7836.2011.04601.x/abstract

F. Avanzini et al., “Solicited Self-Referencing Undermines the Credibility of Researchers and Journals,” Journal of Thrombosis and Haemostasis (n.d.)


Sirs,

We wish to draw the attention of the readership of this journal and of the scientific community at large to what recently happened to a colleague of ours. The letter accompanying the editor’s request for major revision of a manuscript included the following surprising advice: “The Editors would also greatly appreciate you adding more than two but fewer than six references of articles published in [the Journal involved], above all articles published over the past two years.” A rapid survey among a few colleagues told us that this type of editorial policy is not as exceptional as one might believe. Another editor conveyed the same message to a prospective author: “We would like to emphasize that we attach great importance to cross referencing very recent material on the same topic in [this journal]. Therefore, it would be highly appreciated if you would check the last 2 years of [the same Journal] and add all material relevant to your article to the reference list”.

... It is likely that the aforementioned editors’ requests for self-citation of articles published in the previous two years were meant to increase the number of citations and hence inflate the impact factors of their journals.

While it is legitimate that a scientific journal is pleased to see its impact factor rising as a consequence of the increased recognition of its published articles, it is obviously unacceptable for the increase to be artificially triggered through the practice of soliciting self-citations. If this kind of request from editors spreads, prospective authors may soon feel obliged to refer to articles from the journal to which they are submitting their work, even when citations are unnecessary or irrelevant. With time this may become a custom and a way to capture the benevolence of editors, though not serving scientific merits and aims.


This is not just an impact-factor-driven example of unacceptable behaviour; it's plainly wrong. Sad that such a request came from the editors of a journal.

Sunday, January 15, 2012

How to Brew That Perfect Cup Of Coffee

Brewing a perfect cup of coffee is personal. Below, Michael Arrington gives fairly detailed instructions for brewing the perfect cup. Note that while he says a regular drip coffee maker would do, he himself uses a good-quality glass coffee carafe with a filter, and the whole process takes about one minute. He also uses a burr grinder. For me, a French press, ground coffee, and water from an electric kettle are quite satisfying.

UNCRUNCHED 10/01/12 6:18 AM Michael Arrington Uncategorized

I got up early today to watch the debut of the new Charlie Rose CBS morning show. The first thing I do every morning is drink a cup of coffee, but I really needed it this morning when I crawled out of bed at 6:30.

When I’m in San Francisco I usually get coffee at Philz because it’s the closest thing to perfect coffee that I’ve ever had, and it’s near where I stay when I’m there. But when I’m at home in Seattle I do it myself.

I tend to get a bit manic about certain things (like blogging, and making coffee). The last few years I’ve experimented with a dozen or so different ways to brew a perfect cup. A standard Mr. Coffee (which makes a surprisingly good cup of coffee if you do it right). The French Press (near perfect but too easy to create a bitter brew). I’ve even tried the crazier stuff out there like the AeroPress, which does make great coffee but ends up being too complicated and time consuming for me.

The last six months or so I’ve settled on what I think is the perfect brewing process. It’s easy, has very little cleanup and it’s hard to screw up.

Step one: Coffee. I like Peet’s House Blend, but there are lots of great coffees out there. I often end up buying Starbucks Breakfast blend since it’s easier to find up here in Seattle. Some people like a darker roast, but I prefer the higher caffeine kick from a lighter roast coffee.

Step two: Grind that coffee. You need a proper burr grinder if you want to avoid a bitter cup of coffee. Trust me. The problem is you can spend an almost unlimited amount of money on a good burr grinder. I chose a relatively inexpensive Bodum grinder that I’ve been very happy with. For a single cup of coffee I grind it very coarse, for about 8 seconds, to avoid bitterness.

Step Three: Hot water. Seems simple but I don’t like spending time with a kettle or the microwave. Instead I bought a Zojirushi Hybrid Water Boiler (Jack Dorsey talked me into this a year ago). I have hot water on tap all the time at 195 degrees, although there are three temperature settings to choose from.

Step Four: Brew. Since you’re using a burr grinder it’s going to be hard to screw the coffee up at this point. A cheap drip coffee maker is going to be just fine. But I use a Chemex glass coffee carafe. No mechanical parts, it will last as long as you don’t drop it. Just put a filter in with the coffee and add water from the Zojirushi boiler. I fill the filter up twice, using a spoon to get the coffee back into the water the second time since it sticks to the side of the filter.

Step Five: This whole procedure has taken you about 1 minute, most of that is waiting for the coffee to drip. Pour, drink, be happy.