Why Hollywood ignores Hiroshima

Ronald Bergan ponders why American films are so reluctant to depict the Hiroshima bombing.

In the opening dialogue of Alain Resnais’s masterful Hiroshima Mon Amour (1959), the touchstone against which all other films on the subject must be measured, a French actress, in Hiroshima to shoot a film, tells her Japanese lover that she has seen everything in Hiroshima: the exhibits in the museum, the news footage of the injured and dying. Yet he keeps insisting, “You saw nothing in Hiroshima. Nothing.”

American cinema has seen nothing in Hiroshima. Nor has it ever tried.

Read the full article on The Guardian.

Howdoi – Instant Coding Answers via Command Line

The idea is simple – ask a question and get an answer.

$ howdoi format date bash
> DATE=`date +%Y-%m-%d`

Howdoi solves life’s little coding mysteries like the always frustrating command line flags for tar (as illustrated by XKCD).

Github repo
Blog post describing the tool.

The NIPS Experiment

The NIPS consistency experiment was an amazing, courageous move by the organizers this year to quantify the randomness in the review process. They split the program committee down the middle, effectively forming two independent program committees. Most submitted papers were assigned to a single side, but 10% of submissions (166 papers) were reviewed by both halves of the committee. This let them observe how consistent the two committees were on which papers to accept. For fairness, they ultimately accepted any paper that was accepted by either committee.

The results were revealed this week: the two committees disagreed on the fates of 43 of the 166 papers (25.9%). But this “25%” number is misleading, and most people I’ve talked to have misunderstood it: it actually means that the two committees disagreed more than they agreed on which papers to accept.
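A back-of-envelope sketch makes the point concrete. Assuming each committee accepted roughly 22.5% of the 166 doubly-reviewed papers (an assumed acceptance rate, not a figure quoted above), the 43 disagreements imply the committees agreed on fewer accepts than they disagreed on:

```python
# Back-of-envelope check of the NIPS consistency numbers.
# Assumption: each committee accepted ~22.5% of the 166 papers
# reviewed by both halves (the conference-wide acceptance rate).
papers = 166
disagreements = 43
accepts_per_committee = 0.225 * papers  # ~37.4 papers per committee

# |A| + |B| - 2*|A ∩ B| = papers the committees disagreed on,
# so the jointly accepted count follows directly:
both_accepted = (2 * accepts_per_committee - disagreements) / 2

print(f"accepted by both committees: ~{both_accepted:.0f}")
print(f"accepted by only one committee: {disagreements}")
```

Under that assumption, only about 16 papers were accepted by both committees, while each committee accepted roughly 21 papers that the other rejected: disagreement outweighing agreement among the accepts.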

Read the full article “The NIPS Experiment” by Moritz Hardt.

>50% of NIPS papers would be rejected if the review process were rerun

Show and Tell: A Neural Image Caption Generator

Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27.
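The training objective in the abstract, maximizing the likelihood of the caption given the image, reduces to summing per-word log-probabilities from the decoder. A minimal sketch with made-up numbers (the real model computes these distributions with a CNN image encoder feeding an LSTM decoder):

```python
import numpy as np

# Toy illustration of the objective: maximize
#   log p(S | I) = sum_t log p(s_t | I, s_1 .. s_{t-1}).
# The per-step word distributions below are hypothetical values,
# standing in for what the trained decoder would output.
vocab = ["a", "dog", "runs", "<end>"]
caption = ["a", "dog", "runs", "<end>"]

# p(word | image, previous words) at each time step
step_probs = np.array([
    [0.70, 0.10, 0.10, 0.10],  # step 1: "a" most likely
    [0.10, 0.60, 0.20, 0.10],  # step 2: "dog"
    [0.10, 0.10, 0.70, 0.10],  # step 3: "runs"
    [0.05, 0.05, 0.10, 0.80],  # step 4: "<end>"
])

log_likelihood = sum(
    np.log(step_probs[t, vocab.index(w)]) for t, w in enumerate(caption)
)
print(f"log p(caption | image) = {log_likelihood:.3f}")
```

Training nudges the decoder's weights so that this sum, over all training pairs, is as large as possible.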

Full article and PDF.

via Google Research Blog

DeepLearning.University: an annotated deep learning bibliography

DeepLearning.University – An Annotated Deep Learning Bibliography | Memkite.

An impressive annotated bibliography of recent deep learning papers. It doesn’t include publications from before 2014, but it is nevertheless quite comprehensive.

Digital X-ray images in 5 seconds

Aiming to replace conventional medical X-ray equipment, experts in Mexico have created a digital X-ray imaging system that swaps the traditional film plate for a solid-state detector and delivers results in five seconds. Analog equipment takes six minutes to develop the traditional film.

Read the full article on ScienceDaily.


Machine Learning, meet Computer Vision

Computer vision, the field of building computer algorithms to automatically understand the contents of images, grew out of AI and cognitive neuroscience around the 1960s. “Solving” vision was famously set as a summer project at MIT in 1966, but it quickly became apparent that it might take a little longer! The general image understanding task remains elusive 50 years later, but the field is thriving. Dramatic progress has been made, and vision algorithms have started to reach a broad audience, with particular commercial successes including interactive segmentation available as the “Remove Background” feature in Microsoft Office, image search, face detection and alignment, and human motion capture for Kinect. Almost certainly the main reason for this recent surge of progress has been the rapid uptake of machine learning (ML) over the last 15 or 20 years.

This first post in a two-part series will explore some of the challenges of computer vision and touch on the powerful ML technique of decision forests for pixel-wise classification.
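As a toy illustration of pixel-wise classification with an ensemble of weak learners, here is a sketch on synthetic data. It uses single-threshold stumps rather than full decision trees, so it is a drastically simplified stand-in for the forests the article covers, and all values are made up:

```python
import numpy as np

# Pixel-wise classification with an ensemble of decision stumps
# (a degenerate "forest"): each stump thresholds the pixel intensity,
# and the per-pixel label is decided by majority vote.
rng = np.random.default_rng(0)

# Synthetic image: a bright 8x8 square (class 1) on a dark
# background (class 0), plus Gaussian noise.
labels = np.zeros((16, 16), dtype=int)
labels[4:12, 4:12] = 1
image = labels + rng.normal(0.0, 0.1, size=labels.shape)

# Each "tree" tests intensity against a random threshold drawn
# near the midpoint between the two classes.
thresholds = rng.uniform(0.3, 0.7, size=25)
votes = np.stack([(image > t).astype(int) for t in thresholds])
prediction = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote

accuracy = (prediction == labels).mean()
print(f"pixel accuracy: {accuracy:.2f}")
```

Real pixel-wise forests replace the raw intensity test with learned offset-feature tests at each tree node, but the classify-every-pixel-independently structure is the same.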

Read the full article by Jamie Shotton, Antonio Criminisi and Sebastian Nowozin on TechNet Blogs – Machine Learning.
