Wikipedia:Reference desk/Archives/Science/2020 May 3

= May 3 =

How close can you be to a fighter jet passing you at Mach 1 without sustaining injuries?
Some close passes are shown here, but could the people have been injured if these planes had passed the public at, say, 10 meters distance? Count Iblis (talk) 01:22, 3 May 2020 (UTC)
 * Does getting your eardrums blown out count? ←Baseball Bugs What's up, Doc? carrots→ 02:03, 3 May 2020 (UTC)
 * Wouldn't earplugs prevent that? Count Iblis (talk) 03:08, 3 May 2020 (UTC)
 * Maybe, if you knew it was coming and had them to put in. ←Baseball Bugs What's up, Doc? carrots→ 06:55, 3 May 2020 (UTC)
 * See ''Tourist killed by jet blast at notorious Caribbean airport''.
 * If you get VERY close, This Sailor Got Sucked Inside a Fighter Jet Engine (and survived with minor injuries). Alansplodge (talk) 11:11, 3 May 2020 (UTC)
 * I mean, the scenario is slightly contrived. At some point you're very likely to be injured, though it's always possible to escape unscathed in freak circumstances. A jet traveling at Mach 1 is a large object moving very quickly and doing so using one or more turbofan engine(s). Air is a thing that has mass, and the jet compresses and heats the air in front of it as it travels. The engine also sucks in air and propels air+exhaust out the back. Close to the ground, this will kick up any debris lying around; foreign object damage is a major concern for jet aircraft. If the jet blast hits you directly enough, it will knock you over at least, per Newton's Third Law. The article linked above describes a worst-case scenario for this, and you can find plenty of videos of jet blasts knocking things over. In part this is probably standard human intuition misfiring. We kind of innately think of air as "insubstantial" since it doesn't seem to affect us much, but go stand in a hurricane and you'll see the power of lots of it moving at high speeds. --47.146.63.87 (talk) 19:37, 3 May 2020 (UTC)
 * Could the pressure of the sonic boom be harmful at close distances? Our article on sonic booms does not give data allowing an estimate of the pressure at Mach 1. Assuming that the pressure decreases with the square of the distance, the space shuttle at Mach 1.5 results in an overpressure of 19.44 GN/d² at distance d (d in metres). This formula cannot be used at extremely close distances, but should be reasonably valid (for the space shuttle at Mach 1.5) at distances of only a few times the length of the vessel. --Lambiam 07:05, 4 May 2020 (UTC)
 * "The strongest sonic boom ever recorded was 144 pounds per square foot and it did not cause injury to the researchers who were exposed to it. The boom was produced by a F-4 flying just above the speed of sound at an altitude of 100 feet"; see Official United States Air Force Website - Factsheets - Sonic Boom. Alansplodge (talk) 11:19, 4 May 2020 (UTC)
 * About 6.4 kPa at 30 m. Then it is more likely that the effect is inversely proportional to the distance. Reportedly, the human body can withstand 50 psi (about 345 kPa) for a sudden impact. So a supersonic flyover at a height of 0.55 m may be iffy :). --Lambiam 20:30, 4 May 2020 (UTC)
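Lambiam's back-of-envelope estimate can be sketched in a few lines. All numbers are the ones quoted in this thread (the F-4 boom measurement and the ~50 psi tolerance figure), not authoritative aerodynamics, and the inverse-distance scaling is the assumption made in the post above:

```python
# Figures quoted in this thread; treated as given, not verified.
P_MEASURED = 6.4e3   # Pa, overpressure of the record F-4 boom (~144 psf)
D_MEASURED = 30.0    # m, altitude of that flyover (~100 ft)
P_INJURY = 345e3     # Pa, the quoted ~50 psi sudden-impact tolerance

def boom_pressure(d):
    """Overpressure at distance d, assuming it falls off as 1/d."""
    return P_MEASURED * D_MEASURED / d

# Distance at which the inverse-distance estimate reaches the threshold:
d_iffy = D_MEASURED * P_MEASURED / P_INJURY
print(f"{d_iffy:.2f} m")  # ≈ 0.56 m, matching the ~0.55 m figure above
```

This is only an order-of-magnitude sketch; as noted above, neither scaling law holds in the extreme near field.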

DSLR
People say smartphone cameras are getting closer to DSLR quality but what are the main advantages of DSLR over smartphone? Is it night shots and more control over what’s in focus? 90.196.238.188 (talk) 14:35, 3 May 2020 (UTC)
 * DSLRs have much larger apertures than smartphone cameras. Ruslik_ Zero 17:21, 3 May 2020 (UTC)
 * Link: digital single-lens reflex camera. Also optical zoom and interchangeable lenses. There have been moves towards bringing these to phone cameras, and you can always jury-rig a lens attachment to a phone, but dedicated cameras have an advantage in being designed exclusively for photography and thus not needing to compromise for other things like form factor. Some cameras probably have an advantage in battery life, though this will vary widely, and some use swappable batteries while most smartphones these days have internal batteries. (You can still carry a power bank but this adds inconvenience.) --47.146.63.87 (talk) 19:45, 3 May 2020 (UTC)


 * Give it time. I recall 20 or 25 years ago when this same question was being asked about digital vs. film cameras. Digital has since become so good that film is kind of passé. And phone cameras are much better than they were 10 or 15 years ago. So they're getting there. ←Baseball Bugs What's up, Doc? carrots→ 22:35, 3 May 2020 (UTC)


 * Control of focus, exposure characteristics (aperture and shutter speed), depth of field, ISO, processing style options, high-quality interchangeable lenses, use of filters, flash sync and tripods, multiple exposure capability, ergonomics, dedicated customized processors, fast control change options, raw file formats, and vastly larger sensors. Full-frame SLR sensors are 24mm x 36mm - phone camera sensors are around 4.2mm x 5.6mm, so for the same resolution the phone camera pixel size is vastly smaller, leading to problems in low light with such high pixel densities. You can get really excellent pictures with a phone camera, but you need to have good timing and some luck, because you're only likely to get one chance. A dedicated camera lets you take multiple shots in quick succession, bracket exposures and fix problems in near real time. The shutter lag with phone cameras is a perennial problem.  Acroterion   (talk)   04:04, 4 May 2020 (UTC)
 * To go a little farther, it's analogous to film cameras - large-format view cameras beat medium-format cameras for image quality (at a sacrifice of time), and a medium-format Hasselblad will beat a 35mm SLR (again, as long as speed is not required). A 16mm Instamatic substantially underperformed a 35mm camera in all respects. Sensor output, whether in film or CCD, always improves with size, all other things being equal - spreading out photons on the film or the sensor produces a better image. And Instamatics were prone to motion blur because they sacrificed ergonomics to the form factor - just as phones aren't designed for steadiness or motion-free shutter release.  Acroterion   (talk)   19:44, 4 May 2020 (UTC)
 * Yes. I've been an SLR/DSLR user for 42 years.  I have problems taking a photo with a phone without it moving and blurring the photo. And I think that Instamatics didn't have focusing; they just set the focus to the hyperfocal distance to get things sort of in focus. I recently read that 4x5" film still beats the best digital. Bubba73 You talkin' to me? 19:59, 4 May 2020 (UTC)


 * My cheap 7 year-old crop sensor DSLR outperforms my brand new state of the art phone camera by a huge margin when taking night pictures. I can put my DSLR on a tripod, take a large number of pictures, and then do a large amount of processing such as averaging out the noise, make up for the crop sensor by stitching a panorama, forgo doing demosaicing by interpolation and instead use sensor shifts to get to the actual gray values in the different color channels etc. etc. The resulting picture is of a quality that you would struggle to get using a single shot from a medium format $100,000 Hasselblad camera.


 * The phone is far more useful to take quick pictures with. It's small and handy; you can take it out of your pocket at a moment's notice and start shooting with it. The settings are easily adjusted in a few seconds. Also, even if you do have the time to get your DSLR out of the bag and make it ready for use, for certain tasks like shooting video from a plane it's not going to work out well, like these videos I made using my smartphone. Count Iblis (talk) 04:31, 4 May 2020 (UTC)
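The noise-averaging step described above can be sketched numerically: stacking N exposures of the same scene cuts random sensor noise by roughly √N. This is a generic illustration (the noise level and frame count are made-up figures, not anyone's actual workflow):

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0   # idealized "true" gray value of each pixel
NOISE_STD = 10.0     # per-exposure noise level (arbitrary for illustration)
N_FRAMES = 64        # number of tripod exposures in the stack
N_PIXELS = 2000      # sample of pixels used to estimate the noise

# One exposure: every pixel is the true value plus Gaussian noise.
def exposure():
    return [TRUE_VALUE + random.gauss(0, NOISE_STD) for _ in range(N_PIXELS)]

frames = [exposure() for _ in range(N_FRAMES)]

# Noise of one frame vs. the per-pixel average of the whole stack.
single_noise = statistics.pstdev(frames[0])
stacked = [sum(f[i] for f in frames) / N_FRAMES for i in range(N_PIXELS)]
stacked_noise = statistics.pstdev(stacked)
print(round(single_noise, 2), round(stacked_noise, 2))  # ~10 vs ~1.25
```

With 64 frames the noise drops by a factor of about 8, which is the kind of gain a tripod and patience buy over a single handheld shot.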


 * Here are some things I came up with:


 * The more light you can get into a camera, the better the photo. DSLRs have much larger sensors and much larger lenses to gather much more light.  The pixels on a phone are so small and the sensors are so small that they can't gather much light.


 * DSLRs have much better lenses. Small imperfections are magnified on a phone.


 * DSLR lenses are much sharper and have fewer distortions, like chromatic aberration, etc., and probably better contrast and less flare and ghosting.


 * DSLR lenses can go quite a bit more wide angle and much more telephoto.


 * With DSLRs, you can get lenses for macro photography or fisheye lenses. You can also get more specialized lenses like tilt-shift lenses that take out perspective distortion.  (Phones have a huge amount of perspective distortion.)


 * DSLRs have a much wider dynamic range - the difference between the brightest and darkest areas. Details in shadows are lost on phones.


 * With DSLRs you can crop or enlarge a photo and retain detail.


 * With DSLR viewfinders, you can easily compose the shot like you want it. You can follow a moving subject naturally.


 * On DSLRs you can pick a point to expose properly and pick a point to be in sharp focus.


 * You have control over the ISO, the aperture, and the shutter speed. Aperture control lets you decide how much in front of or behind the subject is in focus.  Controlling shutter speed lets you freeze a moving subject or let it blur, as you decide. Also with aperture control, you can control sun stars.


 * DSLRs usually have a built-in flash; you can add an external flash; you can control a remote flash.


 * I don't know about ISO on phones, but I suspect that DSLRs can go to much higher ISOs and take photos in much darker conditions.


 * DSLR lenses generally have hoods to keep sunlight from hitting the lens from the side and causing flares.

Bubba73 You talkin' to me? 08:16, 4 May 2020 (UTC)
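The aperture/shutter control point in the list above boils down to the standard exposure-value relation EV = log₂(N²/t): closing the aperture by one stop (f-number × √2) halves the light, so the shutter time must double to compensate. A generic illustration (the particular f-numbers and speeds are arbitrary examples):

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value at a fixed ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# f/2.8 at 1/500 s vs. f/5.6 at 1/125 s: two stops less light through the
# aperture, exactly compensated by a 4x longer shutter time.
ev_wide = exposure_value(2.8, 1 / 500)
ev_stopped = exposure_value(5.6, 1 / 125)
print(round(ev_wide, 2), round(ev_stopped, 2))  # the two EVs are equal
```

Both settings deliver the same total exposure; what changes is depth of field (more at f/5.6) and motion blur (more at 1/125 s), which is exactly the creative control the list describes.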
 * DSLRs can use filters for special effects.


 * I could also list the changeable batteries and the changeable cards for storage. But a person who would have used an Instamatic, a Brownie, or an instant camera in decades past would be happy with a phone camera.  Bubba73 You talkin' to me? 15:23, 4 May 2020 (UTC)
 * Some phones with high end cameras still have add-on storage e.g. Huawei phones (albeit using their Nano Memory) and Samsung Galaxies (with microSD) so it's not really a point of difference. Changeable batteries have basically disappeared from high end phones. Most high end camera phones also let you pick a point for exposure and focus; and manually adjust ISO, aperture and shutter speed. Although the interaction of the latter set of features with the semi-AI features they rely on to improve photo quality is complicated and given the nature of camera phones I think even most expert or professional photographers don't bother with too much tweaking. Pretty much all phones except for some very basic ones have built in flashes although of course for size and maybe other reasons even the best phone flashes are not comparable to a DSLR. (I suspect you can control a remote flash but it's not generally a useful feature with a phone.) Edit: Sorry. I am wrong about aperture. You cannot adjust this as it's normally fixed. (You can do "simulated" adjustment as I mentioned below although this is often a separate feature from the pro modes where you can adjust shutter speed and ISO.) Samsung did try a real variable aperture for 2 generations but abandoned it with their latest phone, which IMO demonstrates it's not very useful for smart phone cameras. [//www.androidheadlines.com/2020/01/galaxy-s20-to-end-samsungs-variable-aperture-experiment.html] [//www.androidcentral.com/galaxy-s20-cameras-what-are-differences-between-models] Nil Einne (talk) 06:14, 5 May 2020 (UTC) 11:33, 5 May 2020 (UTC)

The most serious difference is the size of the sensor. That is not DSLR-specific since there are also compact and so-called "mirrorless" cameras with equally large sensors. If you think of the sensor as a pizza, then pixels are individual slices of pizza. DSLRs and phone cameras are now equivalent in the sense that in both cases, the pizza is cut into say 10 million slices (10 megapixels), but in the DSLR, the pizza itself is 10 times bigger, so the slices are 10 times bigger! They can hold a lot more electrons, giving more dynamic range, lower noise, fewer diffraction effects, etc. Yet for a long time (and even now), megapixel count was an important marketing metric, like selling pizza by how many slices it was cut into rather than its diameter. It's almost as dumb as it sounds. The main thing that has made phone cameras so good over the past decade or so is computational photography: basically the phone sensor takes a relatively crappy image, but then the chips in the phone are able to enhance it until it looks really nice, equal in some regards to a relatively unprocessed DSLR image, with some artifacts here and there. In principle you could apply similar processing to DSLR images, but this is rarely done because of some combination of older technology and already having good enough quality that the artifacts aren't worth tolerating. DSLRs themselves are getting displaced from the high amateur / mid pro market by "mirrorless" which are interchangeable lens cameras with electronic viewfinders (the "reflex" in DSLR refers specifically to a certain type of optical viewfinder where you see through the lens). I haven't followed this stuff too carefully though. I have an old DSLR that's very obsolete by today's standards but is still fine for my purposes. I bought it as soon as I could afford one, and I'm sticking with it. 2602:24A:DE47:B270:DDD2:63E0:FE3B:596C (talk) 09:55, 4 May 2020 (UTC)
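The pizza analogy can be put into numbers using the sensor dimensions quoted earlier in this thread (the "10 times" above is a loose figure; by area the gap is larger, and the megapixel count here is arbitrary for illustration):

```python
# Sensor dimensions quoted earlier in this thread (mm).
full_frame_area = 24 * 36   # full-frame: 864 mm^2
phone_area = 4.2 * 5.6      # typical phone sensor: 23.52 mm^2
megapixels = 10_000_000     # same "number of slices" cut from both pizzas

area_ratio = full_frame_area / phone_area
pixel_full_um2 = full_frame_area / megapixels * 1e6   # per-pixel area, um^2
pixel_phone_um2 = phone_area / megapixels * 1e6
print(f"sensor (and pixel) area ratio: {area_ratio:.1f}x")
print(f"pixel area: {pixel_full_um2:.1f} um^2 vs {pixel_phone_um2:.2f} um^2")
```

At equal resolution each full-frame "slice" has roughly 37 times the area of a phone pixel, which is where the dynamic-range and low-light advantages come from.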
 * I don't disagree with anything you've said but I'd suggest "take a relatively crappy image" could cause confusion. Depending on the specific example, the computational photography may involve using multiple crappy images in some fashion. This is often the case for "night modes" on phone cameras when multiple short exposures are combined, see e.g. this which mentions Google's Night Sight [//www.theverge.com/2018/11/14/18092660/google-night-sight-review-pixel-2-3-camera-photos-image-quality]. Of course long exposures have been a part of cameras since the very beginning of film, but the classical long exposure is just capturing light for all that time on whatever. Meaning your camera needs to be very still or the light falls on different parts of the frame and you get a blurry image, hence tripods etc. Image stabilisation helps but has its limits especially on a phone. So you instead take multiple images and combine them. (As mentioned in The Verge, the same thing had been done for HDR for a while.) Another related example is bokeh on smart phones. These may rely on using the information from a depth sensor (generally a Time-of-flight camera) and then selectively "blurring" the image of the main sensor. Alternatively for phones without such a sensor, they may use the multiple cameras and other information from multiple images to try to determine depth information. And the "blurring" itself could be a completely simulated out-of-focus effect, or perhaps it involves adjusting the focus or using multiple cameras to help produce the effect. You can also just use one image, like Google does. (In somewhat the reverse, I thought that some phones can also use Focus stacking but can't seem to find any discussion of this so I'm probably wrong.) Other fancy exposure modes like light trails, silky water etc. operate by the same principle. See also [//phys.org/news/2019-01-photos-smartphone-photography.html].
That said, we shouldn't ignore the advances in sensors and other aspects of the camera. See e.g. this which compares Google's long/multiple exposure Night Sight with Huawei's short exposure night photography in the P30 Pro [//www.ubergizmo.com/articles/huawei-p30-pro-low-light-photo-tests/] [//www.anandtech.com/show/14165/the-huawei-p30-p30-pro-reviews-photography-enhanced/8]. I think it's clear this isn't coming just from computational photography advances. Note though even with such short exposures, I'm not sure if we know that they aren't also using multiple images as part of the process. (The P30 still has a long exposure night mode option which is sometimes also used automatically if you turn on the AI feature. I'm guessing the P30 Pro has the same based on [//www.gsmarena.com/huawei_p40_pro-review-2089p5.php].)  Of course with computational photography you also get interesting questions. For example, the P30 moon mode controversy [//www.androidauthority.com/huawei-p30-pro-moon-mode-controversy-978486/] seems to me to be nonsense and the coverage in the English media was often terrible. But one thing which didn't seem to be well discussed is that if you are using deep learning techniques to enhance your image, at least from my understanding of how these tend to work, you probably don't know that well what it's actually doing without a lot of analysis. Clearly it's easy to tell you're not just substituting a stock photograph, but at a deeper level, the difference between "enhancing captured details" and "adding details you think should be there based on other captured details even though they were nowhere in your capture" may not be so clear cut.  Nil Einne (talk) 12:00, 5 May 2020 (UTC)