Sneak Preview [Encore Publication]: Behind the scenes on my ongoing Human/Machine Dance Project

How does the human body move when interacting with artificial intelligence technologies? This is the question my collaborator Carly Lave is setting out to research on a year-long Fulbright grant. A talented dancer, choreographer, and scholar, Carly will present her research findings in visual form via a self-choreographed solo dance performance. I approached her to collaborate on a photography series informed by her research goals and her artistry as a dancer. We explored several visual themes, each related to the timely question of how we humans will be transformed by increasing immersion in advanced technologies, including virtual reality, robotics, and interconnectivity. To complete this ongoing project, we will continue to collaborate across two continents during Carly’s research fellowship. Stay tuned!

In the meantime, I’d like to share a few early images from this work in progress, along with some words about how they were made.  It can be tricky to imagine a visual concept and realize it via photography, and all the more challenging when the concept is abstract, like the ones Carly and I selected.  I’ll present four of our initial concepts, each illustrated with an image and a description of the techniques required to execute it.

  1. Virtual Reality Motion Study: Virtual meets reality as Carly’s body floats through the physical world while her motion is informed by her interactions with the virtual world playing inside her headset.  This concept sounds simple but is quite difficult to execute photographically.  We wanted to capture Carly’s physical motion in the real world as she reacted to the experience of the VR world.  Images like these require a long exposure (here about 15 seconds), which in turn necessitates shooting in a darkened photography studio.  I used a black backdrop and continuous LED lighting to illuminate Carly as she moved across the studio, with a single studio strobe at the front right of the set to capture her final pose at the end of the exposure.  The strobe was set to trigger as the camera’s shutter closed at the end of each shot (“rear curtain sync”).  With this technique, Carly’s motion is traced throughout the exposure, while her final pose is rendered most vividly.  Considerable post-processing is then required to clean up the scene.
  2. Interconnectivity: Carly shared, “I found myself visualizing my body wrapped in cables coiled around my limbs and torso. I was thinking about the body in relation to embedded systems here, and how now all of our world is connected through cables either in the air or sea.”  To shoot this visual concept, we used a dance studio with a mirrored wall.  Shooting from different angles to obtain a variety of perspectives, I captured both Carly and her reflection as she improvised motion along the wall.  The technique conveys the impression of interconnectedness between a human and a similar being across a network.  Post-processing was required to remove clutter and render the background as white.
  3. Human (de-)Evolution Series: A whimsical re-imagining of the classic ape-to-human evolution series, this montage asks us to consider whether technology contributes to or detracts from human evolution.  This striking montage is actually quite straightforward to execute.  In a photography studio against a black backdrop, I shot images of Carly in each stage of the evolution series and then combined them using layers in Photoshop.
  4. Robotic Motion: Machines often perform the same tasks traditionally undertaken by humans, but the robot’s motion is constrained by its programming. How might the human dancer’s motion become similarly constrained if her movement is choreographed by programming instructions?  For this concept, my job was fairly easy and Carly had to do the heavy lifting.  She choreographed movement indicative of robotic motion while I captured a series of images using a studio strobe light and a black fabric backdrop.
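For readers curious about the montage technique in concept 3, the Photoshop layer stack can be emulated in a few lines of code.  This is a minimal sketch, assuming the frames are already loaded as same-sized RGB arrays (the array handling is my illustration, not part of our actual workflow): against a black backdrop, keeping the brighter pixel at each position stacks the lit figure from every frame into one image, just like a “lighten” blend mode.

```python
import numpy as np

def lighten_stack(frames):
    """Combine frames (same-shape H x W x 3 uint8 arrays) shot against a
    black backdrop by keeping the brightest value at each pixel -- the
    same effect as stacking 'lighten' blend-mode layers in Photoshop."""
    composite = frames[0]
    for layer in frames[1:]:
        composite = np.maximum(composite, layer)
    return composite
```

In practice you would load each studio frame with your image library of choice, pass the list of arrays to `lighten_stack`, and save the result; the real montage still needs masking and cleanup by hand.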

I hope this behind-the-scenes peek at my ongoing passion project will help inspire your own creative process.  It’s important to be personally and deeply invested in a project before you begin.  Select your partner(s) carefully and plan thoroughly.  Then the process becomes joyful and exhilarating as you begin to bring your concept to life!

Have you carried out a photography project?  Please share your key learnings, positive and otherwise, here!

Want to read more posts about what to photograph while traveling or near home?  Find them all here: Posts about What to Shoot.

 

Brave New World [Encore Publication]: AI tools for photographers are improving

As a working professional photographer who also spent nearly 30 years as a technology manager and executive, I’ve long had an interest in the intersection between art and technology.  Recent attempts to marry photography with artificial intelligence have ranged from the useful (facial recognition) to the silly (Instagram filters) to pure hype (an expensive camera that, several days after you shoot, sends you only the images it deems worthy).  But as pattern-matching algorithms improve and machine learning becomes more reliable, we are starting to see some amazing applications at the intersection of AI and photography.

I’ve recently been playing around with two good examples from Adobe.  Available only in the online version of Lightroom as “Technology Previews”, these tools enable you to search all your images for specific attributes and to have the AI automatically select what it determines to be your best photos.

To activate these new tools, go to https://lightroom.adobe.com, log in using your Adobe Creative Cloud credentials, and then click on the Lightroom logo in the upper left and select “Technology Previews” from the drop-down menu.  Click the check box next to “Best Photos”, and you’re good to go.

There are two main tools available at this time:

  1. Intelligent Photo Search: This is already very impressive technology.  You can search all or a subset of your images using any natural language term you want.  You could, for example, search all your images for photos of cats, or of mountains, or of dancers, or of waterfalls.  The more specific your search term is, the more accurate the results are likely to be.  When I searched for “waterfall” or for “dancer”, the AI seemed to find many or most of my photos featuring those themes, and only occasionally did it include photos that did not.  When my search terms were broader, like “clouds” or “mountains”, the results were less accurate.  Aesthetic searches, say for the color “blue” or the effect of “motion”, resulted in mostly accurate selections of images featuring these concepts.  While there are a few false matches, and likely quite a few more errors of omission of images that should have matched, this technology is quite useful in its current state.
  2. Best Photos Selection: This one is more of a work in progress.  You can select any of your online galleries and ask the AI to select what it “thinks” the best photos are.  You can move a slider to increase or decrease how selective this tool is.  By default, it shows you its picks for the top half of your photos, and then you can refine the selectivity to include more or fewer photos.  I tried this advanced technology using several of my recent photo galleries.  In most cases, it included my two or three favorite images in its initial selection of the top half of all the photos, but dropped them from its cut as I increased the selectivity.  In one gallery, for example, an image that was recently selected as a favorite by the editors of “National Geographic” was dropped by Adobe’s AI once I narrowed the cut to 10% of the images.  That image was quite artsy and abstract, and it’s probably not reasonable to expect a machine to recognize it as special.  Yet in another of my galleries, the AI included an image that recently won a major local competition in its final cut of just 1% of the images.  That image is a more traditional landscape that could reasonably be evaluated by a machine as a “good” photograph.
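Adobe hasn’t published how these tools work, but the common pattern behind natural-language image search is embedding similarity: a model maps both the text query and each image into the same vector space, and the search returns the images whose vectors point most nearly the same way.  Here is a hedged sketch of that retrieval step; the embeddings themselves would come from a trained model, which is assumed here, and none of these names are Adobe’s:

```python
import numpy as np

def search(query_vec, image_vecs, top_k=3):
    """Rank images by cosine similarity between a text-query embedding
    and per-image embeddings.  Returns (image_index, score) pairs,
    best match first."""
    q = query_vec / np.linalg.norm(query_vec)
    M = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    scores = M @ q                      # cosine similarity per image
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in order]
```

A selectivity slider like the one in Best Photos could then be as simple as a threshold on these scores, though ranking image *quality* (rather than relevance) is a much harder modeling problem.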

The bottom line here is that the applications of advanced technology to the art of photography are improving at an astonishing rate.  While neither of Adobe’s AI tools is as good as a human artist at selecting images by their features or their quality, both tools are off to an impressive start, and one of them (Intelligent Photo Search) is already very usable.  I would not be surprised if, in a year or two, this technology advances to the point where machines are making decisions about photography alongside humans.  Both human and AI evaluations will have their strengths and weaknesses, and I can see them coexisting for the foreseeable future.  I recommend we all, as photographers, get steeped in this advanced technology and prepare for a future in which man and machine will both play a role in sophisticated evaluation of images.

Will Photography Soon Be Obsolete? [Encore Publication]: Musings on AI as artist

A friend recently pondered via a social media post whether we will have photography as we know it in the future, or if artificial intelligence (AI) will soon generate all of our images.  With tens of millions of people now capturing snapshots on their phones’ cameras and instantly applying AI-generated filters to enhance or modify the images, we can certainly observe an increasing trend toward computer involvement in the making of photos.  But I don’t believe AI will replace the artist’s eye in the making of fine-art photographs for quite some time to come.  Here are a few semi-random musings on this theme.

A machine can certainly generate bad art.  In college in the mid-1980s, I wrote a program for my Computer Science final exam that composed musical canons (pieces in which each voice plays the same melody together, but starting at different times).  My code used a semi-random configuration of musical intervals as the opening melody, then applied a simplified set of the rules of counterpoint (how musical lines are allowed to fit together) to complete the canon.  I received an “A” for this project, but truth to tell, any listener familiar with classical music could instantly discern that the pieces composed by my program weren’t anything like the lovely canons written by Telemann, for example.  In other words, my AI didn’t pass the Turing Test.
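The skeleton of a program like that college project is genuinely simple.  Here is a toy sketch of the canon mechanics in Python; the counterpoint rules, which were the interesting part of the original, are omitted, and all the names and parameters here are my illustration rather than the original code:

```python
def make_canon(intervals, start=60, voices=2, offset=4):
    """Toy canon generator: turn a list of semitone intervals into a
    melody of MIDI pitches, then have each voice enter `offset` notes
    after the previous one.  Silent beats are padded with None."""
    melody = [start]
    for step in intervals:
        melody.append(melody[-1] + step)

    total = len(melody) + offset * (voices - 1)
    score = []
    for v in range(voices):
        lead_in = [None] * (offset * v)
        tail = [None] * (total - offset * v - len(melody))
        score.append(lead_in + melody + tail)
    return score
```

Feed it a few intervals and you get each voice playing the same line, staggered in time, which is all a canon is structurally.  Making the overlapping voices obey the rules of counterpoint, and making the result worth hearing, is where the real work (and the gap to Telemann) lies.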

In the more than 30 years since I wrote that program, AI has progressed by leaps and bounds.  Computers can now generate poetry, classical and jazz music, and even paintings that many non-experts judge as products of human artistic creativity.  I’m fascinated by the progress, but so far the best of the AI-generated “art” is really just imitation and trickery: it takes a seed of something original such as a photograph or a melody, and transforms it using a set of complex rules that could be described as a pre-programmed artistic style into something pleasant enough but not inspiring.

In his landmark 1979 book, “Gödel, Escher, Bach: An Eternal Golden Braid,” Douglas Hofstadter amazed the world by demonstrating comparable interlocking themes of grace and elegance among the very different disciplines of mathematics, visual art, and music.  He even speculated on the ability of machines to create works of great insight.  But Hofstadter’s proposed approach differed from that of the AI field that has developed since then in that he favored teaching machines to create via an understanding of how the human mind creates, as opposed to today’s AI approach of taking mountains of data and throwing brute force calculations at it.  To my eye, ear, and mind, this brute force method is the reason most of today’s attempts to artificially emulate the creative process are not insightful and do not add anything to their genres.  And so far, the vast majority of these attempts fail their respective Turing Tests.  That is, humans can tell it is a machine and not a human generating the “art.”

Applying these musings to the art of photography, what do we see today?  To be sure, more images are being generated today than ever before in human history, and the art of photography is being devalued by its sheer pervasiveness.  Everyone captures images now, and many believe that makes them photographers.  While photographers have always required the involvement of a machine in the creation of their art, good photographers have always relied on their artistic vision, the so-called artist’s eye, to create images that are special.  I don’t believe that all the Meitu and similar AI filters that abound today are creating any photographic art that adds insight or helps interpret the world around us.

One very central component in photography is composition.  How does the photographer choose what elements to include in the image, and how will these elements be combined?  Read my recent post on composition here: Post on Composition.  This vital aspect of photography does use some “rules”, such as the Rule of Thirds, Leading Lines, Framing Elements, Point of View, Foreground/Background, and Symmetry and Patterns.  Rules, of course, can be programmed into an AI so that the machine can emulate the way humans create.  But in photographic composition, the “rules” are really just guidelines for getting started.  A good photographer knows when to break the rules for artistic impact.
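To see just how mechanical a compositional “rule” can be, consider that the Rule of Thirds reduces to a few lines of arithmetic.  A minimal sketch (the function name and the idea of returning “power point” coordinates are my illustration):

```python
def thirds_points(width, height):
    """Return the four intersections of the Rule of Thirds grid for an
    image of the given pixel dimensions -- the 'power points' where a
    photographer might place the subject.  Integer division keeps the
    results on whole pixels."""
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]
```

This is exactly the point: the rule is trivially programmable, but knowing when ignoring those four points produces a stronger image is the part no formula captures.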

Even the dumbest devices are capable of generating images.  Security cameras can capture images that we would consider to be rudimentary documentary photographs.  Given long enough, a security camera might accidentally capture what we would consider to be a good street photography image, because after capturing millions of dull scenes, sooner or later the camera will catch a random alignment of interesting elements.  It’s like thousands of monkeys typing random characters: given enough time, one of them will coincidentally type out a Shakespeare sonnet or even a full play.  As wearable computing devices become more pervasive, many people’s lives will be documented in real-time via the capture of millions of images.  Some of these may be interesting to their friends and perhaps the general public.  A few may even have artistic value.  But true artistry isn’t characterized by coincidence.

I don’t doubt that eventually we will get to the point where machines can create images as good as much of what humans can create.  I think we’ll get there, but it will take a long, long time.  And in the meantime, the role of photographer as artist, experimenter, and interpreter of the world around us will continue to be central to our society’s need for communication and expression.

What do you think about the future of photography?  Will we soon see machines creating much of our imagery?  How about our good, artistic imagery?  Please share your thoughts here.

Camera as a Service? [Encore Publication]: A first look at Relonch’s artificial intelligence photography service

In the 150 or so years of its history, the camera has evolved rapidly as a result of the advent of new technologies, each promising to dramatically simplify the photographic process and improve the resulting images.  Just within my lifetime, there have been major upheavals via the introductions of the Kodak Instamatic, the Polaroid SX-70, and of course digital photography.  So perhaps it was inevitable that someone would develop an AI (artificial intelligence) to make photography as simple as pushing a single button.

A company called Relonch (https://relonch.com/) is developing such a system now.  Sometime in 2018, they expect to roll out a Camera as a Service for $99 per month.  You get a loaner of a brightly colored Relonch 291 camera (manufactured by Samsung), with a fixed focal length lens and only a single button, which is used to take the picture.  It doesn’t even have an LCD screen to review your images.  The purported value of the subscription price, of course, is not derived from the camera hardware, but rather from the service.  For this lofty price, your camera transmits your image files to Relonch, who then use algorithms to analyze and process the files.  The next day, they send the processed images that they consider to be your best photos back to your mobile device of choice.

The Relonch 291 camera

The concept here is that most people are confused by all the settings on their camera, even if it’s a fairly simple point-and-shoot device, so their photos rarely come out the way they envisioned them.  Instead, let them use a simple camera with just a single button, but employ AI techniques to post-process the best images to make them look closer to the way the user intended.
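Relonch hasn’t published how its selection or processing works, but the core idea — algorithmically ranking frames and keeping only the “best” — can be sketched with a toy focus measure.  Everything here (the gradient-based scoring function, the selection rule) is my own illustrative assumption, not Relonch’s method:

```python
def sharpness_score(pixels):
    """Crude focus measure: mean absolute horizontal gradient of a
    grayscale image given as rows of 0-255 ints.  In-focus frames
    have strong local contrast; blurry or flat frames score low."""
    grads = [abs(row[x + 1] - row[x])
             for row in pixels for x in range(len(row) - 1)]
    return sum(grads) / len(grads)

def pick_keepers(frames, k=1):
    """Rank candidate frames and keep only the k 'best' -- a toy
    stand-in for the service deciding which photos you get back."""
    return sorted(frames, key=sharpness_score, reverse=True)[:k]
```

A real system would weigh far more than sharpness (exposure, faces, composition), but even this sketch shows where the unease comes from: the selection criteria live in the code, not in the photographer’s head.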

I haven’t tried the camera and its wraparound service yet (the company’s headquarters and showroom are in Palo Alto, near where I live, so perhaps I can do so soon), but based on the company’s description of the concept, I’ll share my initial thoughts here.

  1. Will users pay $1200 per year for this service?  I’m skeptical that there is a broad market for the service at this price point.  “Serious” photographers, that is, professionals and enthusiast amateurs, already know how to use the manual controls on our cameras and enjoy the process of capturing images and enhancing them during post-processing to achieve the final results we want.  Are there enough users who don’t know how to use their cameras but are still willing to pay so much for better images?  Time will tell.
  2. Are people willing to leave the choice of which images they receive up to a software algorithm?  I wouldn’t want someone else, even a top professional photographer, deciding which of my images I get to see and permanently deleting the rest.  And I certainly wouldn’t want an AI to make this decision for me.
  3. Are users okay with waiting a day to see and share their images?  We’ve gotten pretty spoiled as a consumer class.  We expect instant gratification, and ever since the first Polaroid cameras came out in the 1940s, photographers have been able to see their images right away.  Waiting a day may not fly.
  4. Do people really want their photography to be mechanized?  Throughout its history, photography has seen its reputation tarnished in comparison with other visual art media because this art form depends on a mechanical device, the camera.  Just as a great painter creates her art through her vision and her technique, so does a great photographer.  The gear we use is only incidental to the quality of the images we create.  I fear that by taking the craft out of the process and substituting an AI for the artist’s vision, the Relonch service will further degrade photography as an art form.  And let’s be honest here.  An AI can adjust color balance, sharpness, clarity, vibrance, and exposure to improve a raw image, but it can’t determine how to crop or selectively adjust parts of the image to make it artistically pleasing or to give it a story to tell.  And most important of all, no amount of post-processing can turn a poorly composed or an uninteresting image into one worth looking at.  An AI may soon be able to drive our cars from Point A to Point B, but we’re a long way from having an algorithm that can create true visual fine art.  I’ll leave you with the words of master landscape photographer Ansel Adams: “There’s nothing worse than a sharp image of a fuzzy concept.”

What do you think of the Camera as a Service concept?  Valuable evolution of photography that will bring its benefits to a wider range of humanity, or expensive gimmick that will degrade the artistic worth of the medium of photography?  Please share your thoughts here.

Want to read more posts about gear?  Find them all here: Posts on Gear.

Will Photography Soon Be Obsolete? [Encore Publication]: Musings on AI as artist

A friend recently pondered via a social media post whether we will have photography as we know it in the future, or if artificial intelligence (AI) will soon generate all of our images.  With tens of millions of people now capturing snapshots on their phones’ cameras and instantly applying AI-generated filters to enhance or modify the images, we can certainly observe an increasing trend toward computer involvement in the making of photos.  But I don’t believe AI will replace the artist’s eye in the making of fine-art photographs for quite some time to come.  Here are a few semi-random musings on this theme.

A machine can certainly generate bad art.  In college in the mid-1980s, I wrote a program for my Computer Science final exam that composed musical canons (pieces in which each voice plays the same melody together, but starting at different times).  My code used a semi-random configuration of musical intervals as the opening melody, then applied a simplified set of the rules of counterpoint (how musical lines are allowed to fit together) to complete the canon.  I received an “A” for this project, but truth to tell, any listener familiar with classical music could instantly discern that the pieces composed by my program weren’t anything like the lovely canons written by Telemann, for example.  In other words, my AI didn’t pass the Turing Test.
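That student program is long gone, but its approach can be sketched in a few lines of Python.  This is an after-the-fact reconstruction under deliberately simplified assumptions — one melody, two voices, consonance-only vertical checks — not the original code:

```python
import random

# Vertical intervals (mod 12 semitones) treated as consonant under a
# deliberately simplified rule set: unison/octave, thirds, fifths, sixths.
CONSONANT = {0, 3, 4, 7, 8, 9}

def make_canon(length=16, offset=4, seed=1):
    """Build one melody (as MIDI note numbers) such that a second
    voice singing the same melody `offset` beats later is always
    consonant against the first voice."""
    rng = random.Random(seed)
    melody = [60]  # start on middle C
    while len(melody) < length:
        i = len(melody)
        # candidate next notes: small melodic steps up or down
        candidates = [melody[-1] + step for step in (-4, -3, -2, 2, 3, 4)]
        rng.shuffle(candidates)
        for note in candidates:
            # the trailing voice is sounding melody[i - offset] right now
            if i < offset or (note - melody[i - offset]) % 12 in CONSONANT:
                melody.append(note)
                break
        else:
            # no small step is consonant: double the trailing voice (a unison)
            melody.append(melody[i - offset])
    return melody
```

Every note is chosen so the melody harmonizes with a delayed copy of itself — which is precisely why the output is rule-abiding but lifeless: nothing in the program knows why Telemann’s canons are lovely.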

In the more than 30 years since I wrote that program, AI has progressed by leaps and bounds.  Computers can now generate poetry, classical and jazz music, and even paintings that many non-experts judge as products of human artistic creativity.  I’m fascinated by the progress, but so far the best of the AI-generated “art” is really just imitation and trickery: it takes a seed of something original such as a photograph or a melody, and transforms it using a set of complex rules that could be described as a pre-programmed artistic style into something pleasant enough but not inspiring.

In his landmark 1979 book, “Gödel, Escher, Bach: An Eternal Golden Braid,” Douglas Hofstadter amazed the world by demonstrating comparable interlocking themes of grace and elegance among the very different disciplines of mathematics, visual art, and music.  He even speculated on the ability of machines to create works of great insight.  But Hofstadter’s proposed approach differed from that of the AI field that has developed since then in that he favored teaching machines to create via an understanding of how the human mind creates, as opposed to today’s AI approach of taking mountains of data and throwing brute force calculations at it.  To my eye, ear, and mind, this brute force method is the reason most of today’s attempts to artificially emulate the creative process are not insightful and do not add anything to their genres.  And so far, the vast majority of these attempts fail their respective Turing Tests.  That is, humans can tell it is a machine and not a human generating the “art.”

Applying these musings to the art of photography, what do we see today?  To be sure, more images are being generated today than ever before in human history, and the art of photography is being devalued by its sheer pervasiveness.  Everyone captures images now, and many believe that makes them photographers.  While photographers have always required the involvement of a machine in the creation of their art, good photographers have always relied on their artistic vision, the so-called artist’s eye, to create images that are special.  I don’t believe that Meitu and the similar AI filters that abound today are creating any photographic art that adds insight or helps interpret the world around us.

One very central component in photography is composition.  How does the photographer choose what elements to include in the image, and how will these elements be combined?  Read my recent post on composition here: Post on Composition.  This vital aspect of photography does use some “rules”, such as the Rule of Thirds, Leading Lines, Framing Elements, Point of View, Foreground/Background, and Symmetry and Patterns.  Rules, of course, can be programmed into an AI so that the machine can emulate the way humans create.  But in photographic composition, the “rules” are really just guidelines for getting started.  A good photographer knows when to break the rules for artistic impact.
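To make that concrete, here is what one such programmable “rule” can look like as code — a toy Rule of Thirds scorer.  The scoring formula is my own illustrative choice, not any standard algorithm:

```python
def thirds_score(subject_xy, frame_wh):
    """Score subject placement in [0, 1]: 1.0 when the subject sits
    exactly on one of the four thirds-grid intersections ("power
    points"), decaying linearly with normalized distance."""
    (x, y), (w, h) = subject_xy, frame_wh
    # the four intersections of the thirds grid
    powers = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    # normalized distance to the nearest power point
    dist = min((((x - px) / w) ** 2 + ((y - py) / h) ** 2) ** 0.5
               for px, py in powers)
    return max(0.0, 1.0 - dist / 0.5)
```

A machine can maximize this score perfectly; knowing when a dead-center or edge-hugging composition is stronger is exactly the judgment the score can’t encode.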

Even the dumbest devices are capable of generating images.  Security cameras can capture images that we would consider to be rudimentary documentary photographs.  Given long enough, a security camera might accidentally capture what we would consider to be a good street photography image, because after capturing millions of dull scenes, sooner or later the camera will catch a random alignment of interesting elements.  It’s like thousands of monkeys typing random characters: given enough time, one of them will coincidentally type out a Shakespeare sonnet or even a full play.  As wearable computing devices become more pervasive, many people’s lives will be documented in real-time via the capture of millions of images.  Some of these may be interesting to their friends and perhaps the general public.  A few may even have artistic value.  But true artistry isn’t characterized by coincidence.

I don’t doubt that eventually we will get to the point where machines can create images as good as much of what humans can create.  I think we’ll get there, but it will take a long, long time.  And in the meantime, the role of photographer as artist, experimenter, and interpreter of the world around us will continue to be central to our society’s need for communication and expression.

What do you think about the future of photography?  Will we soon see machines creating much of our imagery?  How about our good, artistic imagery?  Please share your thoughts here.

Will Photography Soon Be Obsolete?: Musings on AI as artist

A friend recently pondered via a social media post whether we will have photography as we know it in the future, or if artificial intelligence (AI) will soon generate all of our images.  With tens of millions of people now capturing snapshots on their phones’ cameras and instantly applying AI-generated filters to enhance or modify the images, we can certainly observe an increasing trend toward computer involvement in the making of photos.  But I don’t believe AI will replace the artist’s eye in the making of fine-art photographs for quite some time to come.  Here are a few semi-random musings on this theme.

A machine can certainly generate bad art.  In college in the mid-1980s, I wrote a program for my Computer Science final exam that composed musical canons (pieces in which each voice plays the same melody together, but starting at different times).  My code used a semi-random configuration of musical intervals as the opening melody, then applied a simplified set of the rules of counterpoint (how musical lines are allowed to fit together) to complete the canon.  I received an “A” for this project, but truth to tell, any listener familiar with classical music could instantly discern that the pieces composed by my program weren’t anything like the lovely canons written by Telemann, for example.  In other words, my AI didn’t pass the Turing Test.
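
For the curious, the heart of that student project can be sketched in a few lines.  This is a from-memory toy reconstruction in Python, not the original code; the function name `make_canon` and the particular consonant-interval set are my own simplifications.  But it shows the approach: a semi-random melody, with a simplified counterpoint rule applied wherever the delayed second voice overlaps the first.

```python
import random

# Consonant intervals in semitones: unison, 3rds, perfect 5th, 6ths, octave.
CONSONANT = {0, 3, 4, 7, 8, 9, 12}

def make_canon(length=8, delay=2, seed=None):
    """Compose a toy two-voice canon: a semi-random leader melody, and the
    same melody entering `delay` notes later.  Each new note is nudged
    until it is consonant against the note the follower voice is
    sounding at that moment (a drastically simplified counterpoint)."""
    rng = random.Random(seed)
    melody = [60]  # start on middle C (MIDI pitch number)
    for _ in range(length - 1):
        step = rng.choice([-4, -3, -2, -1, 1, 2, 3, 4])  # small melodic steps
        candidate = melody[-1] + step
        overlap_index = len(melody) - delay  # follower's current note, if any
        if overlap_index >= 0:
            # Raise the candidate until the two voices form a consonance.
            while abs(candidate - melody[overlap_index]) % 12 not in CONSONANT:
                candidate += 1
        melody.append(candidate)
    follower = [None] * delay + melody[:length - delay]  # delayed entrance
    return melody, follower
```

Even this sketch makes the shortcoming obvious: the output obeys the rules, but nothing in it knows what makes a Telemann canon lovely.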

In the more than 30 years since I wrote that program, AI has progressed by leaps and bounds.  Computers can now generate poetry, classical and jazz music, and even paintings that many non-experts judge to be products of human artistic creativity.  I’m fascinated by the progress, but so far the best of the AI-generated “art” is really just imitation and trickery: it takes a seed of something original, such as a photograph or a melody, and transforms it, via a set of complex rules that amount to a pre-programmed artistic style, into something pleasant enough but not inspiring.

In his landmark 1979 book, “Gödel, Escher, Bach: An Eternal Golden Braid,” Douglas Hofstadter amazed the world by demonstrating comparable interlocking themes of grace and elegance among the very different disciplines of mathematics, visual art, and music.  He even speculated on the ability of machines to create works of great insight.  But Hofstadter’s proposed approach differed from that of the AI field that has developed since then: he favored teaching machines to create via an understanding of how the human mind creates, as opposed to today’s approach of taking mountains of data and throwing brute-force calculations at it.  To my eye, ear, and mind, this brute-force method is the reason most of today’s attempts to artificially emulate the creative process are not insightful and do not add anything to their genres.  And so far, the vast majority of these attempts fail their respective Turing Tests.  That is, humans can tell it is a machine and not a human generating the “art.”

Applying these musings to the art of photography, what do we see today?  To be sure, more images are being generated today than ever before in human history, and the art of photography is being devalued by its sheer pervasiveness.  Everyone captures images now, and most of them believe that makes them photographers.  While photographers have always required the involvement of a machine in the creation of their art, good photographers have always relied on their artistic vision, the so-called artist’s eye, to create images that are special.  I don’t believe that all the Meitu and similar AI filters that abound today are creating any photographic art that adds insight or helps interpret the world around us.

One very central component of photography is composition.  How does the photographer choose which elements to include in the image and how to combine them?  I haven’t written a post in this space yet that specifically covers the topic of composition, but it’s on my list to write and publish soon.  This vital aspect of photography does use some “rules”, such as the Rule of Thirds, Leading Lines, Framing Elements, Point of View, Foreground/Background, and Symmetry and Patterns.  Rules, of course, can be programmed into an AI so that the machine can emulate the way humans create.  But in photographic composition, the “rules” are really just guidelines for getting started.  A good photographer knows when to break them for artistic impact.
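
To illustrate just how mechanical such a rule is, here is a toy Rule-of-Thirds scorer, entirely my own sketch and not drawn from any real product.  It rewards placing the subject near one of the four intersections of the imaginary thirds grid.  A machine can compute this perfectly; knowing when a dead-center or edge placement is actually the stronger choice is exactly the judgment the score cannot capture.

```python
def thirds_score(subject_xy, image_wh):
    """Score how close a subject is to a Rule-of-Thirds 'power point'
    (one of the four intersections of the thirds grid).  Returns a value
    near 1.0 at an intersection, falling toward 0.0 far away from all four."""
    x, y = subject_xy
    w, h = image_wh
    # The four grid intersections.
    points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    # Distance to the nearest power point, normalized so that a subject
    # a third of the image diagonal away scores zero.
    nearest = min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in points)
    diagonal = (w ** 2 + h ** 2) ** 0.5
    return max(0.0, 1.0 - nearest / (diagonal / 3))
```

For a 6000×4000 frame, a subject at (2000, 1333) sits almost exactly on a power point and scores near 1.0, while a dead-center subject scores about 0.5 — even when dead center is the better composition.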

Even the dumbest devices are capable of generating images.  Security cameras can capture images that we would consider to be rudimentary documentary photographs.  Given long enough, a security camera might accidentally capture what we would consider to be a good street photography image, because after capturing millions of dull scenes, sooner or later the camera will catch a random alignment of interesting elements.  It’s like thousands of monkeys typing random characters: given enough time, one of them will coincidentally type out a Shakespeare sonnet or even a full play.  As wearable computing devices become more pervasive, many people’s lives will be documented in real-time via the capture of millions of images.  Some of these may be interesting to their friends and perhaps the general public.  A few may even have artistic value.  But true artistry isn’t characterized by coincidence.

I don’t doubt that eventually we will get to the point where machines can create images as good as much of what humans can create.  I think we’ll get there, but it will take a long, long time.  And in the meantime, the role of photographer as artist, experimenter, and interpreter of the world around us will continue to be central to our society’s need for communication and expression.

What do you think about the future of photography?  Will we soon see machines creating much of our imagery?  How about our good, artistic imagery?  Please share your thoughts here.

 

Camera as a Service?: A first look at Relonch’s artificial intelligence photography service

In the 150 or so years of its history, the camera has evolved rapidly as a result of the advent of new technologies, each promising to dramatically simplify the photographic process and improve the resulting images.  Just within my lifetime, there have been major upheavals via the introductions of the Kodak Instamatic, the Polaroid SX-70, and of course digital photography.  So perhaps it was inevitable that someone would develop an AI (artificial intelligence) to make photography as simple as pushing a single button.

A company called Relonch (https://relonch.com/) is developing such a system now.  Sometime in 2018, they expect to roll out a Camera as a Service for $99 per month.  You get a loaner of a brightly colored Relonch 291 camera (manufactured by Samsung), with a fixed focal length lens and only a single button, which is used to take the picture.  It doesn’t even have an LCD screen to review your images.  The purported value of the subscription price, of course, is not derived from the camera hardware, but rather from the service.  For this lofty price, your camera transmits your image files to Relonch, who then use algorithms to analyze and process the files.  The next day, they send the processed images that they consider to be your best photos back to your mobile device of choice.

Relonch 291
The Relonch 291 camera

The concept here is that most people are confused by all the settings on their camera, even if it’s a fairly simple point-and-shoot device, so their photos rarely come out the way they envisioned them.  Instead, let them use a simple camera with just a single button, but employ AI techniques to post-process the best images to make them look closer to the way the user intended.

I haven’t tried the camera and its wraparound service yet (the company’s headquarters and showroom are in Palo Alto, near where I live, so perhaps I can do so soon), but based only on their description of the concept, I’ll share my thoughts here.

  1. Will users pay $1200 per year for this service?  I’m skeptical as to whether there is a broad market for the service at this price point.  “Serious” photographers, that is, professionals and enthusiast amateurs, already know how to use the manual controls on our cameras and enjoy the process of capturing images and enhancing them during post-processing to achieve the final results we want.  Are there enough users who don’t know how to use their cameras but are still willing to pay so much for better images?  Time will tell.
  2. Are people willing to leave the choice of which images they receive up to a software algorithm?  I wouldn’t want someone else, even a top professional photographer, deciding which of my images I get to see and permanently deleting the rest.  And I certainly wouldn’t want an AI to make this decision for me.
  3. Are users okay with waiting a day to see and share their images?  We’ve gotten pretty spoiled as a consumer class.  We expect instant gratification, and ever since the first Polaroid cameras came out in the 1940s, photographers have been able to see their images right away.  Waiting a day may not fly.
  4. Do people really want their photography to be mechanized?  For its whole history, photography has seen its reputation tarnished when compared to other visual art media because a part of this art form includes the use of a mechanical device, the camera.  Just as a great painter creates her art through her vision and her technique, so does a great photographer.  The gear we use is only incidental to the quality of the images we create.  I fear that by taking the craft out of the process and substituting an AI for the artist’s vision, the Relonch service will further degrade photography as an art form.  And let’s be honest here.  An AI can adjust color balance, sharpness, clarity, vibrance, and exposure to improve a raw image, but it can’t determine how to crop or selectively adjust parts of the image to make it artistically pleasing or to give it a story to tell.  And most important of all, no amount of post-processing can turn a poorly composed or an uninteresting image into one worth looking at.  An AI may soon be able to drive our cars from Point A to Point B, but we’re a long way from having an algorithm that can create true visual fine art.  I’ll leave you with the words of master landscape photographer Ansel Adams: “There’s nothing worse than a sharp image of a fuzzy concept.”
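
To be concrete about what those global adjustments look like, here is a toy sketch of the kind of exposure-and-contrast correction an algorithm can apply.  This is purely my own illustration; I have no knowledge of Relonch’s actual pipeline.  Note what is missing: nothing here knows what the picture is of.

```python
def auto_enhance(pixels):
    """Apply the global adjustments an algorithm can make on its own:
    normalize exposure by stretching the tonal range to full scale, then
    add a touch of contrast with an S-curve.  `pixels` is a flat list of
    0-255 luminance values; returns the adjusted list."""
    lo, hi = min(pixels), max(pixels)
    span = max(hi - lo, 1)  # avoid dividing by zero on a flat image

    def tone(v):
        x = (v - lo) / span        # stretch to the full 0..1 range
        x = x * x * (3 - 2 * x)    # smoothstep S-curve: darkens shadows,
        return round(255 * x)      # brightens highlights, boosts midtone contrast

    return [tone(v) for v in pixels]
```

Run on a flat, gray capture, this reliably produces a punchier image — and it will do so just as dutifully for a brilliantly composed frame as for a boring one, which is precisely my point.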

What do you think of the Camera as a Service concept?  Valuable evolution of photography that will bring its benefits to a wider range of humanity, or expensive gimmick that will degrade the artistic worth of the medium of photography?  Please share your thoughts here.