Technology

How to Make Educational Videos with a Tablet

We’re well into the Biggest Ideas in the Universe series, and some people have been asking how I make the actual videos. I explained the basic process in the Q&A video for Force, Energy, and Action – embedded below – but it turns out that not everyone watches every single video from start to finish (weird, I know), and besides, the details have changed a little bit. And for some reason a lot of people want to do pedagogy via computer these days.

The Biggest Ideas in the Universe | Q&A 3 - Force, Energy, and Action

So let me explain my process here, starting with the extremely short version. You need:

  • A computer
  • A video camera (webcam okay)
  • A microphone (computer mic okay)
  • A tablet with writing instrument (e.g. iPad Pro and Apple Pencil)
  • Writing app on the tablet (e.g. Notability)
  • Screen capturing/video editing software on the computer (e.g. Screenflow)
  • Whatever wires and dongles are required to hook all that stuff together.

Hmm, looking over that list it doesn’t seem as simple as I thought. And this is the quick-and-easy version! But you can adapt the level of commitment to your own needs.

The most important step here is to capture your writing, in real time, on the video. (You obviously don’t have to include an image of yourself at all, but it makes things a bit more human, and besides who can possibly talk without making gestures, right?) So you need some kind of tablet to write on. I like the iPad Pro quite a bit, but note that not all iPad models are compatible with a Pencil (or other stylus). And writing with your fingers just doesn’t cut it here.

You also need an app to do the actual writing. I am quite fond of both Notability and Notes Plus. (I’m sure that non-iOS ecosystems have their own apps, but there’s no sense in which I’m familiar with the overall landscape; I can only tell you about what I use.) These two apps are pretty similar, with small differences at the edges. When I’m taking notes or marking up PDFs, I’m actually more likely to use Notes Plus, as its cutting/pasting is a bit simpler. And that’s what I used for the very early Biggest Ideas videos. But I got numerous requests to write on a dark background rather than a light one, which is completely reasonable. Notability has that feature and as far as I know Notes Plus does not. And it’s certainly more than good enough for the job.

Then you need to capture your writing, and your voice, and optionally yourself, onto video and edit it together. (Again, no guarantees that my methods are simplest or best, only that they are mine.) Happily there are programs that do everything you want at once: they will capture video from a camera, separately capture audio input, and also separately capture part or all of your computer screen, and/or directly from an external device. Then they will let you edit it all together how you like. Pretty sweet, to be honest.

I started out using Camtasia, which worked pretty well overall. But not perfectly, as I eventually discovered. It wasn’t completely free of crashes, which can be pretty devastating when you’re 45 minutes into an hour-long video. And capture from the iPad was pretty clunky; I had to show the iPad screen on my laptop screen, then capture that region into Camtasia. (The app is smart enough to capture either the whole screen, or any region on it.) By the way, did you know you can show your iPhone/iPad screen on your computer, at least with a Mac? Just plug the device into the computer, open up QuickTime, click “new movie recording,” and ask it to display from the mobile device. Convenient for other purposes.
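Incidentally, if you end up doing that QuickTime trick often, it can be kicked off from a script. Here's a minimal sketch in Python, just shelling out to AppleScript; the "new movie recording" command is real, but you still have to pick the iPad as the camera source from the dropdown next to the record button by hand.

    import subprocess

    # Open a "New Movie Recording" window in QuickTime Player.
    # Choosing the iPad from the camera dropdown next to the record
    # button still has to be done by hand; that part isn't scripted here.
    script = 'tell application "QuickTime Player" to new movie recording'
    subprocess.run(["osascript", "-e", script], check=True)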

But happily, with Screenflow, which I’ve subsequently switched to, that workaround isn’t necessary; it will capture directly from your tablet (as long as it’s connected to your computer). And in my (very limited) experience it seems a bit more robust and user-friendly.

Okay, so you fire up your computer, open Screenflow, plug in your tablet, point your webcam at yourself, and you’re ready to go. Screenflow will give you a window in which you can make sure it’s recording all the separate things you need (tablet screen, your video, your audio). Hit “Record,” and do your thing. When you’re done, hit “Stop recording.”

What you now have is a Screenflow document that has different tracks corresponding to everything you’ve just recorded. I’m not going to do a full tutorial about editing things together — there’s a big internet out there, full of useful advice. But I will note that you will have to do some editing; it’s not completely effortless. Fortunately it is pretty intuitive once you get the hang of the basic commands. Here is what your editing window in Screenflow will look like.

Main panel at the top left, and all of your tracks at the bottom — in this case (top to bottom) camera video, audio, iPad capture, and static background image. The panel on the right toggles between various purposes; in this case it’s showing all the different files that go into making those tracks. (The video is chopped up into multiple files for reasons having to do with my video camera.) Note that I use a green screen, and one of the nice things about Screenflow is that it will render the green transparent for you with a click of a button. (Camtasia does too, but I’ve found that it doesn’t do as well.)

Editing features are quite good. You can move and split tracks, resize windows, crop windows, add text, set the overall dimensions, etc. One weird thing is that some of the editing features require that you hit Control or Shift or whatever, and when exactly you’re supposed to do this is not always obvious. But it’s all explained online somewhere.

So that’s the basic setup, or at least enough that you can figure things out from there. You’ll also have to upload the result to YouTube or your class website or wherever you choose, but that part is up to you.

Okay now onto some optional details, depending on how much you want to dive into this.

First, webcams are not the best quality, especially the ones built into your laptop. I thought about using my iPhone as a camera — the lenses etc. on recent ones are quite good — but surprisingly the technology for doing this is either hard to find or nonexistent. (Of course you can make videos using your phone, but using your phone as a camera to make and edit videos elsewhere seems to be much harder, at least for me.) You can upgrade to an external webcam; Logitech has some good models. But after some experimenting I found it was better just to get a real video camera. Canon has some decent options, but if you already have a camera lying around it should be fine; we’re not trying to be Stanley Kubrick here. (If you’re getting the impression that all this costs money … yeah. Sorry.)

If you go that route, you have to somehow get the video from the camera to your computer. You can get a gizmo like the Cam Link that will pipe directly from a video camera to your computer, so that basically you’re using the camera as a webcam. I tried it and found that it was … pretty bad? It really hurt the video quality, though it’s completely possible I just wasn’t setting things up properly. So instead I just record within the camera to an SD card, then transfer to the computer after the fact. For that you’ll need an SD-to-USB adapter, or maybe you can find a camera that can do it over wifi (mine doesn’t, sigh). It’s a straightforward drag-and-drop to get the video into Screenflow, but my camera chops recordings up into 20-minute segments. That’s fine; Screenflow sews them together seamlessly.
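If the daily drag-and-drop gets tedious, a few lines of Python will do the transfer for you. This is just a sketch under made-up assumptions: the volume name and folder layout below are hypothetical, since every camera buries its clips somewhere different under DCIM.

    import shutil
    from pathlib import Path

    # Hypothetical paths -- adjust for how your own SD card mounts
    # and where your camera stores its clips.
    sd_card = Path("/Volumes/SDCARD/DCIM")
    project = Path.home() / "Videos" / "raw-clips"
    project.mkdir(parents=True, exist_ok=True)

    # Copy every 20-minute segment, in order, skipping ones already copied.
    for clip in sorted(sd_card.rglob("*.MP4")):
        target = project / clip.name
        if not target.exists():
            shutil.copy2(clip, target)
            print("copied", clip.name)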

You might also think about getting a camera that can be controlled wirelessly, either via dedicated remote or by your phone, so that you don’t have to stand up and walk over to it every time you want to start and stop recording. (Your video will look slightly better if you place the camera away from you and zoom in a bit, rather than placing it nearby.) Sadly this is something I also neglected to do.

If you get a camera, it will record sound as well as video, but chances are that sound won’t be all that great (unless maybe you use a wireless lavalier mic? Haven’t tried that myself). Also your laptop mic isn’t very good, trust me. I have an ongoing podcast, so I am already over-equipped on that score. But if you’re relatively serious about audio quality, it would be worth investing in something like a Blue Yeti.

If you want to hear the difference between good and not-so-good microphones, listen to the Entropy video, then the associated Q&A. In the latter I forgot to turn on the real mic, and had to use another audio track. (To be honest I forget whether it was from the video camera or my laptop.) I did my best to process that track to make it sound reasonable, but the difference is obvious.

Of course if you do separately record video and audio, you’ll have to sync them together. Screenflow makes this pretty easy. When you import your video file, it will come with attached audio, but there’s an option to — wait for it — “detach audio.” You can then sync your other audio track (each track displays a waveform indicating volume, so just slide one until they match up), and delete the original.
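Sliding by eye works fine, but if you'd rather let the computer find the offset, the standard trick is a cross-correlation of the two audio tracks. A rough sketch, assuming you've exported both as mono WAV files at the same sample rate (the filenames are placeholders):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    # Placeholder filenames; both tracks assumed mono, same sample rate.
    rate, cam = wavfile.read("camera_audio.wav")
    rate2, mic = wavfile.read("mic_audio.wav")
    assert rate == rate2

    # A minute of audio is plenty to locate the offset, and keeps this fast.
    n = 60 * rate
    cam = cam[:n].astype(np.float64)
    mic = mic[:n].astype(np.float64)

    # The peak of the cross-correlation gives the relative shift in samples.
    corr = correlate(cam, mic, mode="full", method="fft")
    lag = np.argmax(corr) - (len(mic) - 1)
    print(f"tracks offset by about {abs(lag) / rate:.2f} seconds")

Nudge one track by that amount in Screenflow and the waveforms should land on top of each other.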

Finally, there’s making yourself look pretty. There is absolutely nothing wrong with just showing whatever office/home background you shoot in front of — people get it. But you can try to be a bit creative with a green screen, and it works much better than the glitchy Zoom backgrounds etc.

The bad news is, you’ll have to actually purchase a green screen, as well as something to hold it up. It’s a pretty basic thing, a piece of cloth or plastic. And, like it or not, if you go this route you’re also going to have to get lights to point at the green screen. If the screen isn’t brightly lit, it’s much harder to remove in the editor. The good news is, once you do all that, removing the green is a snap in Screenflow (which is much better at this than Camtasia, I found).
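If you're curious what "removing the green" actually amounts to, it's conceptually simple: make transparent every pixel where green clearly dominates red and blue, which is exactly why even, bright lighting matters so much. A toy version in Python (Pillow plus NumPy, with a threshold you'd tune by hand):

    import numpy as np
    from PIL import Image

    frame = Image.open("frame.png").convert("RGB")   # placeholder filename
    rgb = np.asarray(frame).astype(np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # A pixel counts as "green screen" if green clearly beats red and blue.
    # Uneven lighting makes this margin inconsistent, which is why a dimly
    # lit screen keys so badly.
    is_green = (g > 90) & (g > r + 40) & (g > b + 40)

    alpha = np.where(is_green, 0, 255).astype(np.uint8)
    out = np.dstack([rgb.astype(np.uint8), alpha])
    Image.fromarray(out, mode="RGBA").save("frame_keyed.png")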

You’ll also want to light yourself, with at least one dedicated light. (Pros will insist on at least three — fill, key, and backlight — but we all have our limits.) Maybe this is not so important, but if you want a demonstration, my fondness for goofing up has once again provided for you — on the Entanglement Q&A video, I forgot to turn on the light. The difference in quality is there for you to judge.

My home office looks like this now. At least for the moment.

Oh right, one final thing. If you’re making hour-long (or so) videos, the file sizes get quite big. The Screenflow project for one of my videos will be between 20 and 30 GB, and I export to an mp4 that is another 5 GB or so. It adds up if you make a lot of videos! So you might think about investing in an external hard drive. The other options are to save everything on a growing collection of SD cards, or just delete files once you’ve uploaded to YouTube or wherever. Neither of which was very palatable to me.
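If you want to budget the storage ahead of time, the arithmetic is simple; here it is with my own rough per-video numbers plugged in (yours will differ):

    # Rough per-video figures from my projects -- substitute your own.
    project_gb = 25        # Screenflow project, typically 20-30 GB
    export_gb = 5          # exported mp4
    videos = 50

    total_tb = videos * (project_gb + export_gb) / 1000
    print(f"roughly {total_tb:.1f} TB for {videos} videos")

Fifty videos is on the order of a couple of terabytes, which is exactly external-hard-drive territory.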

You can see my improvement in all these respects over the course of the series. I upgraded my video camera, switched from a light background to a dark background on the writing screen, traded in Camtasia for Screenflow, and got better at lighting. I also moved the image of me from the left-hand side to the right-hand side of the screen, which I understand makes the captions easier to read.

I’ve had a lot of fun and learned a lot. And probably put more work into setting things up than most people will want to. But what’s most important is content! If you have something to say, it’s not that hard to share it.


Memory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what’s known as “memory-driven computing.” On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.
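Just to spell out how steep that exponential is: doubling every eighteen months means a factor of 2^(t/1.5) after t years, which gets silly quickly.

    # Doubling every 1.5 years: growth factor = 2 ** (years / 1.5)
    for years in (3, 9, 15, 30):
        factor = 2 ** (years / 1.5)
        print(f"{years:>2} years -> about {factor:,.0f}x the processing power")

Fifteen years of Moore's Law is roughly a thousandfold improvement; thirty years is about a millionfold. That's the pace we've been taking for granted, and the pace that's now running into atomic-scale limits.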

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and cost less energy to get moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible.

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can’t buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of “pre-computation” strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
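The lookup-table idea is easy to illustrate in miniature: compute something expensive once, park it in memory, and answer later queries by lookup and interpolation rather than recomputation. Here's a toy sketch with sines; the memory-driven bet is that the same trade of (abundant, fast) memory for processor time pays off for far more elaborate problems.

    import numpy as np

    # Precompute sin(x) on a fine grid once -- the "big table in memory."
    xs = np.linspace(0.0, 2 * np.pi, 1_000_000)
    table = np.sin(xs)

    def table_sin(x):
        """Answer by interpolating into the stored table instead of recomputing."""
        return np.interp(x % (2 * np.pi), xs, table)

    print(table_sin(1.0), np.sin(1.0))   # agree to many decimal places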

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.
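As a cartoon of what "flagging anomalies in real time" means, here is about the simplest possible version: keep a window of recent data hot in memory and flag anything that sits far outside it. Real searches are enormously more sophisticated, but the sketch shows why fast access to lots of recent data is the limiting resource.

    import math
    import random
    from collections import deque

    window = deque(maxlen=1000)   # recent history kept "hot" in memory

    def is_anomaly(x, threshold=5.0):
        """Flag x if it lies many standard deviations outside recent history."""
        if len(window) > 100:
            mean = sum(window) / len(window)
            var = sum((v - mean) ** 2 for v in window) / len(window)
            if var > 0 and abs(x - mean) > threshold * math.sqrt(var):
                return True              # don't let outliers pollute the baseline
        window.append(x)
        return False

    stream = [random.gauss(0.0, 1.0) for _ in range(10_000)]
    stream[5000] = 25.0                  # inject one obvious glitch
    print(sum(is_anomaly(x) for x in stream), "candidate anomalies")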

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.


We Are All Machines That Think

My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.


Chaos, Hallucinogens, Virtual Reality, and the Science of Self

Chaotic Awesome is a webseries hosted by Chloe Dykstra and Michele Morrow, generally focused on all things geeky, such as gaming and technology. But the good influence of correspondent Christina Ochoa ensures that there is also a healthy dose of real science on the show. It was a perfect venue for Jennifer Ouellette — science writer extraordinaire, as well as beloved spouse of your humble blogger — to talk about her latest masterwork, Me, Myself, and Why: Searching for the Science of Self.

Jennifer’s book runs the gamut from the role of genes in forming personality to the nature of consciousness as an emergent phenomenon. But it also fits very naturally into a discussion of gaming, since our brains tend to identify very strongly with avatars that represent us in virtual spaces. (My favorite example is Jaron Lanier’s virtual lobster — the homuncular body map inside our brain is flexible enough to “grow new limbs” when an avatar takes a dramatically non-human form.) And, just for fun (okay, for the sake of scientific research), Jennifer and her husband tried out some psychoactive substances that affect the self/other boundary in a profound way. I’m mostly a theorist, myself, but willing to collect data when it’s absolutely necessary.


A Boy and His Atom

Ready for your close-up? I mean, really close up. IBM has released the world’s highest-resolution movie: an animated short film in which what you’re seeing are individual atoms, manipulated by a scanning tunneling microscope. Here is “A Boy and His Atom”:

A Boy And His Atom: The World's Smallest Movie

And here is an explanation of how it was made:

Moving Atoms: Making The World's Smallest Movie


Why Is Code Hard to Understand?

Anyone who has tried to look at somebody else’s computer code — especially in the likely event that it hasn’t been well-commented — knows how hard it is to figure out what’s going on. (With sometimes dramatic consequences.) There are probably numerous reasons why, having to do with the difference between heuristic human reasoning and the starkly literal nature of computer instructions. Here’s a short paper that highlights one reason in particular: people tend to misunderstand code when it seems like it should be doing one thing, while it’s actually doing something else. (Via Simon DeDeo.)

What Makes Code Hard to Understand?
Michael Hansen, Robert L. Goldstone, Andrew Lumsdaine

What factors impact the comprehensibility of code? Previous research suggests that expectation-congruent programs should take less time to understand and be less prone to errors. We present an experiment in which participants with programming experience predict the exact output of ten small Python programs. We use subtle differences between program versions to demonstrate that seemingly insignificant notational changes can have profound effects on correctness and response times. Our results show that experience increases performance in most cases, but may hurt performance significantly when underlying assumptions about related code statements are violated.
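To get a feel for the kind of trap they're studying, here is a two-line illustration of my own (in the spirit of the paper, not necessarily one of its actual test programs): the whitespace suggests one grouping, but Python applies another.

    base, extra = 5, 2
    # The spacing suggests (base + extra) * 2 = 14, but Python evaluates
    # base + (extra * 2), so this prints 9.
    total = base+extra * 2
    print(total)

Experienced programmers know the precedence rules perfectly well, and that's exactly the point: expectations set by surface notation can override knowledge you certainly have.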

As someone who is jumping back into programming after a lengthy hiatus, I find this stuff very interesting. I wonder how far away we are from natural-language programming, where we can just tell the computer what we want in English and it will reliably do it. (Guess: pretty far, but not that far.)


More Gradual Erosion in the Dignity of Humankind

The next obvious step in the robots’ scheme to take over the world: develop an unbeatable strategy for Rock-Paper-Scissors. (The robots are patient; their plan has a lot of steps.)

Janken (rock-paper-scissors) Robot with 100% winning rate

It didn’t bother me when computers became better than us at chess, but this is outrageous.


Paying for Creativity

Over on Facebook, a single blog post was linked to by four different friends of mine: a physicist, a science writer/spouse, a saxophone player, and a screenwriter. Clearly something has struck a nerve!

The common thread binding together these creative people who make a living off of their creative work is the impact of technology on how we distribute intellectual property. In other words: do you ever pay for music any more?

Emily White doesn’t. She’s an intern at NPR’s All Things Considered, where she wrote a blog post saying that she “owns” over 11,000 songs, but has only paid for about 15 CDs in her entire life. The rest were copied from various sources or shared over the internet. She understands that the people who made the music she loves deserve to be paid for their work, and she’s willing to do so — but only if it’s convenient, and apparently the click it takes to purchase from iTunes doesn’t qualify.

The brilliant (and excessively level-headed) response that my friends all linked to was penned by David Lowery. He makes the case much better than I would have, so read him. Making the case is necessary; there is a long tail of compensation in creative fields, and we’re all familiar with the multi-millionaires, so it’s easy to forget the much larger numbers of people sweating to earn a decent living. Not everyone has the ability to create work that other people are willing to pay for, of course; the universe does not owe you the right to earn money from your writing or thinking or playing. But when other people appreciate and benefit from your stuff, you do have a right to be compensated, I think.

Coincidentally, today I stumbled across a book that I didn’t know existed — one about me! Or at least, one whose title is my name. Since nobody other than my Mom thinks I deserve to have a book written about me, my curiosity was piqued.

Turns out that the book (apparently) isn’t so much about me, as a collection of things I have written, supplemented by Wikipedia pages. None of which I knew about at all. In other words, for $60 you can purchase a 160-page book of things you can find on the internet for free. There is a company, VDM Publishing, that specializes in churning such things out via print-on-demand. Turning Wikipedia pages into a book is bizarre and disreputable, but possibly legal. Taking blog posts and articles I have written and including them in the book is straight-up illegal, I’m afraid.

Fortunately, I’m not losing much value here, as only a crazy person would pay $60 for an unauthorized collection of Wikipedia articles and blog posts, and I like to think that my target audience is mostly non-crazy people. But it’s a bad sign, I would think. Stuff like this is only going to become more popular.

Don’t let that dissuade you from purchasing highly authorized collections of very good blog posts! For example The Best Science Writing Online 2012, appearing this September. No posts from Cosmic Variance this year, but I have it on good authority that the editor worked really hard to make this a standout collection.


Dismal Global Equilibria

The Civilization series of games takes players through the course of history, allowing them to guide a society/nation from way back in prehistory up through the near future (say, 2100). You develop technologies, choose political systems, and raise armies. There are various ways to “win” the game: military conquest, achieving a just and happy society, or building a spaceship that will travel to Alpha Centauri. It’s a great pastime for any of us who harbor the suspicion that the world would be a better place if we were installed as a benevolent dictator.

Although the game is supposed to take you to the near future, apparently (I’ve never played) you can keep going if you choose to. Which is exactly what one commenter at Reddit did: he has been nursing a single game of Civilization II for ten years now, bringing his virtual global society up to the year 3991 AD. (Via It’s Okay to Be Smart, a wonderful blog.) At which point we may ask: what have we learned?

The news is not good. If you’ve ever read 1984, the outcome will be eerily familiar. I can do no better than quote:

  • The world is a hellish nightmare of suffering and devastation.
  • There are 3 remaining super nations in the year 3991 A.D, each competing for the scant resources left on the planet after dozens of nuclear wars have rendered vast swaths of the world uninhabitable wastelands.
  • The ice caps have melted over 20 times (somehow) due primarily to the many nuclear wars. As a result, every inch of land in the world that isn’t a mountain is inundated swamp land, useless to farming. Most of which is irradiated anyway.

It gets better from there.

What we actually learn about is the structure of the game. We have one player against the computer (who manages multiple civilizations), each with certain goals — a paradigmatic game theory problem. Such games can have “equilibrium strategies,” where no player can make a unilateral change that would improve their situation. Assuming that this player isn’t simply missing something, it’s likely that the game has reached one such equilibrium. That could be the only equilibrium, or there could be a happier one that might have been reached by making different decisions along the way.
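The "equilibrium" here is the game theorist's notion, and you can watch a dismal one emerge in a game far smaller than Civilization. In the classic prisoner's-dilemma payoffs below (a toy sketch, obviously, not a model of the Civilization endgame), a brute-force check finds exactly one equilibrium: both sides defect, even though both would do better if they both cooperated.

    from itertools import product

    C, D = "cooperate", "defect"
    # payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
    payoffs = {
        (C, C): (3, 3), (C, D): (0, 5),
        (D, C): (5, 0), (D, D): (1, 1),
    }

    def is_equilibrium(a, b):
        """Neither player gains by unilaterally switching their own move."""
        row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in (C, D))
        col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in (C, D))
        return row_ok and col_ok

    print([m for m in product((C, D), repeat=2) if is_equilibrium(*m)])
    # -> [('defect', 'defect')] : stable, and worse for everyone than (C, C)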

What we would like to learn, but can’t, is whether this has any relevance to the real globe. It might! But maybe not. The Earth isn’t a closed system, so the “escape to another planet” option is on the table. But the Solar System is quite finite, and largely forbidding, and other stars are really far away. So limiting our attention to the Earth alone isn’t necessarily a bad approximation.

Right now the human population of the Earth is very far from equilibrium, either politically, or technologically, or socially, or simply in terms of sheer numbers. A real equilibrium wouldn’t be burning through finite resources like fossil fuels at such a prodigious rate, continually inventing new technologies, and constantly re-arranging its political map. But it’s possible (probably unlikely) that we could reach a quasi-equilibrium state in another couple of centuries. With a system as complicated as human civilization on Earth, naive extrapolation of past trends toward the future isn’t likely to tell us much. But “sustainable” isn’t a synonym for “desirable.” If there could be such a near-term equilibrium, would it be a happy one, or the game-prognosticated future of endless war and suffering?

Not clear. I have some measure of optimism, based on the idea that real people wouldn’t simply persist in the same cycles of conflict and misery for indefinite periods of time. It only seems that way sometimes.


Technological Applications of the Higgs Boson

Can you think of any?

Here’s what I mean. When we set about justifying basic research in fundamental science, we tend to offer multiple rationales. One (the easy and most obviously legitimate one) is that we’re simply curious about how the world works, and discovery is its own reward. But often we trot out another one: the claim that applied research and real technological advances very often spring from basic research with no specific technological goal. Faraday wasn’t thinking of electronic gizmos when he helped pioneer modern electromagnetism, and the inventors of quantum mechanics weren’t thinking of semiconductors and lasers. They just wanted to figure out how nature works, and the applications came later.

So what about contemporary particle physics, and the Higgs boson in particular? We’re spending a lot of money to look for it, and I’m perfectly comfortable justifying that expense by the purely intellectual reward associated with understanding the missing piece of the Standard Model of particle physics. But inevitably we also mention that, even if we don’t know what it will be right now, it’s likely (or some go so far as to say “inevitable”) that someday we’ll invent some marvelous bit of technology that makes crucial use of what we learned from studying the Higgs.

So — anyone have any guesses as to what that might be? You are permitted to think broadly here. We’re obviously not expecting something within a few years after we find the little bugger. So imagine that we have discovered it, and if you like you can imagine we have the technology to create Higgses with a lot less overhead than a kilometers-across particle accelerator. We have a heavy and short-lived elementary particle that couples preferentially to other heavy particles, and represents ripples in the background field that breaks electroweak symmetry and therefore provides mass. What could we possibly do with it?
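Just to put a number on "short-lived": for a Higgs boson in the mass range being searched, say around 125 GeV, the Standard Model total width comes out to a few MeV, giving a lifetime of roughly

    \tau = \frac{\hbar}{\Gamma_H} \approx \frac{6.6 \times 10^{-22}\ \mathrm{MeV\,s}}{4\ \mathrm{MeV}} \approx 1.6 \times 10^{-22}\ \mathrm{s}.

(Those figures are standard, but take the precision loosely.) So whatever application you dream up can't rely on keeping Higgses around; it has to work through the field itself, or through processes the particle mediates.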

Specificity and plausibility will be rewarded. (Although there are no actual rewards offered.) So “cure cancer” gets low marks, while “improve the rate of this specific important chemical reaction” would be a lot better.

Let your science-fiction-trained imaginations roam, and chime in.
