You can take TWO screenshots, moments apart, open them in GIMP, paste one over the other, and choose any one of these layer modes:
Lighten, Screen, Addition, Darken, Multiply, Linear burn, Hard Mix, Difference, Exclusion, Subtract, Grain Extract, Grain Merge, or Luminance.
https://ibb.co/DDQBJDKR
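For the scriptably inclined, a minimal sketch of the same idea in Python with Pillow (file names are hypothetical): the static background cancels out in the difference, while the scrolling noise inside the letters almost never matches between the two shots.

```python
from PIL import Image, ImageChops

# Two screenshots taken moments apart (hypothetical file names).
a = Image.open("shot1.png").convert("L")
b = Image.open("shot2.png").convert("L")

# "Difference" layer mode: identical background pixels go to 0 (black),
# while the ever-changing pixels inside the letters mostly don't.
diff = ImageChops.difference(a, b)
diff.point(lambda v: 255 if v > 0 else 0).save("revealed.png")
```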
> You can take TWO screenshots, moments apart, open them in GIMP, paste one over the other, and choose any one of these layer modes:
You actually don't need any image editing skill. Here is a browser-only solution:
1. Take two screenshots.
2. Open these screenshots in two separate tabs on your browser.
3. Switch between tabs very, very quickly (use CTRL-Tab)
Source: tested on Firefox
reminds me of this: https://www.reddit.com/r/LifeProTips/comments/5jdzsx/lpt_use...
I went cross-eyed on my screenshot, and I couldnt read the word, but I did notice some artifacts
This was used in some early-20th-century astronomical setting, I think to detect supernovae. I can't find any documentation now, but my memory is that it was called "blink testing" or something similar, where one switched rapidly between two images of a star field so that changes due to a supernova would stand out.
https://en.m.wikipedia.org/wiki/Blink_comparator
That's it exactly! Thanks.
What does that accomplish? You can just read the web page as-is...
Are you going to share your two screenshots, and provide those instructions, with others? That seems impractical.
Video recording is a bit less impractical, but there you really need a short looping animation to avoid ballooning the file size. An actual readable screenshot has its advantages...
> use CTRL-Tab
Thank you forever for this, I had only ever used Ctrl-Page Up/Down for that.
You could also just record a video.
Hah, indeed, that was my first thought. This is clearly for fun though, it’s a cool project idea
I've found that taking two screenshots and adding them as separate layers works well: set one to Difference, then tweak the opacity.
Here it is in Pixelmator Pro: https://i.moveything.com/299930fb6174.mp4
Is it possible to modify the webpage to make the pattern of the text go down and the pattern of the background do up?
Yes: https://jsfiddle.net/kx6stbcL/
Neat idea.
A friend of mine made a similar animated GIF type captcha a few years ago but based on multiple scrolling horizontal bars that would each reveal their portion of the underlying image including letters, and made a (friendly) bet that it should be pretty hard to solve.
Grabbing the entire set of frames and greyscaling them, averaging over all of them, and then applying a few minor fixups like thresholding and contrast adjustment worked easily enough, as the letters were revealed in more frames than not (though I don't think it would affect the difficulty much if that were different). After that the rest of the image was pretty amenable to character recognition.
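Something like this sketch in Python with Pillow and NumPy (the file names are hypothetical and the global threshold is deliberately crude):

```python
import numpy as np
from PIL import Image, ImageSequence

# Greyscale every frame of the animated captcha, then average them.
frames = [np.asarray(f.convert("L"), dtype=np.float64)
          for f in ImageSequence.Iterator(Image.open("captcha.gif"))]
mean = np.mean(frames, axis=0)

# Letters revealed in more frames than not survive the average;
# a crude global threshold separates them from the background.
binary = (mean > mean.mean()).astype(np.uint8) * 255
Image.fromarray(binary).save("solved.png")  # then feed to OCR
```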
That's reminiscent of a (possibly apocryphal?) method I once read about to get "clean" images of normally crowded public places - take multiple photos over time, then median each pixel. Never had the opportunity to try it myself, but I thought it sounded plausible as a way to get rid of transient "noise" from an otherwise static image.
That's a real method:
https://digital-photography-school.com/taking-photos-in-busy...
https://petapixel.com/2019/09/18/how-to-shoot-people-free-ph...
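A sketch of that trick in Python with NumPy and Pillow (file names hypothetical, and it assumes a fixed tripod): as long as each pixel shows the static background more often than any single passer-by, the per-pixel median throws the crowd away.

```python
import numpy as np
from PIL import Image

# Load N shots of the same scene taken over time (hypothetical names).
paths = [f"shot_{i:02d}.jpg" for i in range(20)]
stack = np.stack([np.asarray(Image.open(p)) for p in paths])  # (N, H, W, 3)

# The per-pixel median keeps whatever value dominates over time:
# the static background, not the transient pedestrians.
clean = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(clean).save("clean.jpg")
```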
But it only works well if the crowds move out of the way reasonably quickly. If we're talking about areas packed with people all blocking a certain area, and you need hours of shots, the change in ambient lighting over time will have negative effects on the end photo.
Ah, that's the method indeed! Thanks!
Bottom layer normal, second layer grain extract, top layer vivid light. This completely blacks out the whole area outside of the text.
Out of sheer curiosity, I put three screenshots of the noise into Claude Opus 4.1, Gemini 2.5 Pro, and GPT 5, all with thinking enabled with the prompt “what does the screen say?”.
Opus 4.1 flagged the message due to prompt injection risk, Gemini made a bad guess, and GPT 5 got it by using the code interpreter.
I thought it was amusing. Claude’s (non) response got me thinking - first, it was very on brand, second, that the content filter was right - pasting images of seemingly random noise into a sensitive environment is a terrible idea.
> pasting images of seemingly random noise into a sensitive environment is a terrible idea
BLIT protection. https://www.infinityplus.co.uk/stories/blit.htm
> pasting images of seemingly random noise into a sensitive environment is a terrible idea.
Only if your rendering libraries are crap.
I think they mean prompt injection rather than some malformed image to trigger a security bug in the processing library
The LLM is the image processing library in this case so you are both right :)
Computer vision mode: AND the screenshots together.
But then that would be a video, not a screenshot
Layered images do not a video make. Sequential images distributed over time do.
[dead]
Yeah if this became popular, we'd have another Show HN for a tool that automated that.
Or just copy the text from the URL. Not very secure, really. :D
Or just ... record a video of the screen.
What tool do you use to make such a video ?
This game disappears if you pause it: https://youtube.com/watch?v=Bg3RAI8uyVw
This is great - seems to be the same effect of hiding a shape using an animated noise pattern on a background of static noise.
They even provide the source code for the effect:
https://github.com/brantagames/noise-shader
Interesting that the perception of objects/text does not disappear immediately; there is a smooth fade-out.
Not really a game, but neat all the same.
It reminds me of the mid-1990s video game Magic Carpet.
https://en.wikipedia.org/wiki/Magic_Carpet_(video_game)
This was a pseudo-3D game and on an ordinary display it used perspective to simulate 3D like most games. If you had 3D goggles it could use them, but I didn't.
However, it could do a true 3D display on a 2D monitor using a random-dot stereogram.
https://en.wikipedia.org/wiki/Random_dot_stereogram
If you have depth perception and are able to see RDS autostereograms, then Magic Carpet did an animated one. It was a wholly remarkable effect, but for me anyway, it was really hard to watch. It felt like it was trying to rotate my eyeballs in their sockets. Very impressive, but essentially unplayable, and I could only watch for a minute or two before I couldn't stand the discomfort any more.
I played the game, but had no idea about that feature.
Also playable in the browser: https://playclassic.games/games/action-dos-games-online/play...
Yes - I was thinking of this. It solves various complicated problems such as rendering distance information in this format.
First time seeing this, makes me smile involuntarily.
See also: Lost in the Static
https://silverspaceship.com/static/
Good one. Just found the game I was trying to find for the initial comment: "No Signal" (https://www.tiktok.com/@teekenng/video/7520954215116639496)
Really clever use of a TV remote as controller.
This is great. The sphere example looks especially pleasing. It also reminds me of the game The Voidness.
Reminds me a bit of the album cover of _Any Minute Now_ by Soulwax
https://upload.wikimedia.org/wikipedia/en/a/ab/AnyMinuteNow....
gotta squint to see it
I first saw this effect in a video from Branta Games.
https://www.youtube.com/watch?v=Bg3RAI8uyVw
The effect is disrupted by introducing rendering artifacts, e.g. by watching the video in 144p or, in this case, by zooming out.
I'd love to know the name of this effect, so I can read more about the fMRI studies that make use of it.
What I've found so far:
Random Dot Kinematogram
Perceptual Organization from Motion (video of Flounder camouflage)
https://www.youtube.com/watch?v=2VO10eDIyiE
If anybody implements that to anti-screenshot some sensitive data, somebody else will use another phone, a tablet or a camera to record a video of it. Nice idea though.
It's just adding friction: Someone determined will figure out a way to get the text.
Sometimes friction is enough.
Or the same one.
While a screencap image hides the message, a screencap video shows it perfectly well.
[dead]
I'm wondering. Can we also come up with something the other way around? Text you cannot read, unless you take a screenshot?
If you have a high enough refresh rate display, then yes: just alternately flash black-on-white and white-on-black text (i.e. invert it every frame). We perceive an essentially low-pass-filtered visual input (with limitations like neural firing rate), so eventually it should appear as just uniform grey. Maybe adding some confusing elements would make it feasible at lower refresh rates.
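A minimal sketch of that idea in Python with pygame, assuming a display fast enough for flicker fusion (the 240 Hz figure is a guess, not a tested threshold):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 200))
font = pygame.font.SysFont(None, 72)
clock = pygame.time.Clock()
invert = False

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # Alternate the two inverted renderings: the eye averages them to
    # uniform grey, but any single captured frame has readable text.
    fg, bg = ((0, 0, 0), (255, 255, 255)) if invert else ((255, 255, 255), (0, 0, 0))
    screen.fill(bg)
    screen.blit(font.render("hello", True, fg), (120, 70))
    pygame.display.flip()
    invert = not invert
    clock.tick(240)  # needs a display that actually refreshes this fast
pygame.quit()
```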
Maybe not exactly what you meant, but it reminded me of the following: when one of our Apple servers failed a decade ago and just vomited out walls of error logs too fast to read anything, the Apple support guy we called took his smartphone and made some photos to read and fix the error.
You could probably do it with timing tricks related to video refresh. Wait until the monitor has finished refreshing, then draw the text into the framebuffer. Leave the text there a short while, but erase it before the monitor starts refreshing again. Repeat.
The screenshot would have a chance of capturing the text, depending on exactly when the screenshot pulls pixel data out of the framebuffer.
This might not work on certain devices. You need access to the refresh timing information, and the capture mechanism used for screenshots might also vary.
I don't see any text, just something like scrolling white noise. I'm able to screenshot it. Am I missing something?
Yes, there is a “Hello” in the noise, made visible through contrast and animation.
https://gist.github.com/jncornett/d7cb397ce3ceff268a0ee1b86f...
On iPhone: screen record. Take screenshots every couple of seconds. Overlay the images with 50% transparency (I use Procreate Pocket for this part).
A single photo is good enough as long as the exposure time is long enough to capture the motion blur.
On Android: Take a look at the URL, see the text in plain-text :)
Nice. I did not think to look there.
Others have mentioned Branta Games, but I first saw the effect here: https://youtu.be/TdTMeNXCnTs
thanks, that's also the best explained one!
This one is actually more sophisticated because it doesn't rely on scrolling pixels like the OP. So the object doesn't just disappear in screenshots, but also when the animation stops moving! So you can't actually display text that stands still, like the "hello" in the OP.
Yep. He tries text in another video by flipping pixels for one or more frames, so the words disappear very quickly. Definitely harder to read, especially longer words: https://youtu.be/EDQeArrqRZ4
I'm not sure I follow. Couldn't you display text that stands still by (re)drawing the outline of the text repeatedly? It would essentially be a two frame animation
I think the algorithm in the video is doing a very specific thing where there's a zero-width pixel-grid-clamped stroke (picture an etch-a-sketch-like seam carving "between" the bounds of pixels on the grid) moving about the grid, altering (with XOR?) anything it advances across.
So, sure, you could try to implement this by having a seam that is made to "reverberate" back and forth "across" the outlining pixels of a static shape on each frame. But that's not exactly the same thing as selecting the outline of the shape itself and having those pixels update each frame. Given the way this algorithm looks to work, pushing the seam "inwards" vs "outwards" across the same set of pixels forming the outline might gather an entirely different subset of pixels, creating a lot of holes or perhaps double-counting pixels.
And if you fix those problems, then you're not really using this algorithm any more; you're just doing the much-more-boring thing of taking a list of pixel positions forming the outline and updating them each frame. :)
I believe the algorithm in the video works by flipping the pixel color when the pixel changes from foreground (some shape) to background, or from background to foreground. If the shape doesn't move, there is no such change, so it disappears.
In the OP the foreground pixels continuously change (scrolling in this case) while the background doesn't change. That's a different method of separating background and foreground.
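A hedged sketch of my reading of that rule in Python with NumPy (not the video author's actual code): flip a pixel exactly when its foreground/background membership changes between frames.

```python
import numpy as np

def step(frame: np.ndarray, mask_prev: np.ndarray, mask_cur: np.ndarray) -> np.ndarray:
    """frame: 2D array of 0/1 pixels; masks: boolean foreground masks."""
    changed = mask_prev != mask_cur               # pixels that crossed the boundary
    return frame ^ changed.astype(frame.dtype)    # flip only those pixels

# If the shape stops moving, mask_prev == mask_cur everywhere, nothing
# flips, and the shape melts back into the static noise.
```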
As soon as I read the title I knew it would be akin to "Bad Apple that disappears when you pause it"
https://www.youtube.com/watch?v=bVLwYa46Cf0
And another version of this, using apples instead of white noise
https://www.youtube.com/watch?v=r40AvHs3uJE
Nice one, the good (great?) thing is that you can save this as a plain old HTML file and you've got the whole code :-) It hasn't got any type of license included or any other info as comments, so perhaps the creator or the OP can let us know.
This should have an epilepsy warning. Or something of that kind. It certainly made me feel sick.
This is more a curious question for those affected by epilepsy. If you know you are triggered by such things, how long an exposure is required to trigger an effect? Are you able to notice that media may be triggering and simply close it, or is exposure and triggering almost instantaneous?
I saw the game using this rendering weeks ago and it looked okay. Now I saw the font and tried to hold on to the edges while reading it, and yes, somehow this made me more (sea)sick. Strange.
Perhaps faces would be strongest in terms of reaction.
Oh yes please add a warning. My brain is burning right now!
So Windows 11 easily bypasses this when taking a screenshot: just switch to video mode. (Yeah yeah, not technically a screenshot, but the same default software built in to Windows.)
This makes me feel motion-sick, which is kind of impressive because I'm normally not easily susceptible to that.
My eyes went straight into seeing 3D image mode. It's the easiest one I've seen yet! /s
Hello fellow person from the 90s. mine eyes did the same too.
Heh my eyes felt like they started bleeding
"The text disappears..." And my eyesight with it
Has anyone tried a long exposure to see if the motion smears into something discernible? Obviously harder to expose a bright screen without some ND since the shutter speed is the phone's main exposure control
Here's the screen recording version of a long exposure (thanks for the nerd snipe) - https://gist.github.com/spro/7599415b0e47de65311557b3454771a...
Perhaps this technique could be defeated by scrolling the background in the opposite direction as the text
That's what I was expecting to see. I didn't have a mount for my phone handy to try it. The exporting of frames from a video is a good compromise though. Nice one.
If you zoom out to 25 % the text is clearly visible and screenshottable.
Probably the lower spatial frequencies of the noise are not matched? Not sure if frequencies on the order of the movement frequency can actually be matched.
How do you take a “long exposure” screenshot? Isn’t every screenshot a perfect digital copy of a single frame or a full on video?
Clearly, I meant using a camera, and I'm guessing you knew that too
Not the parent but that was not at all clear to me. I immediately thought of taking multiple successive instantaneous screenshots and then stacking them. I'm not sure I would have thought of using a camera within a few minutes to an hour, it's not a tool I would ever reach for normally.
I just did this with 50% transparency. It works
Also not the parent but how the hell did you not understand what "long exposure" means ffs
Because the context is about screenshots and context matters
"ffs".
You mean like all of the context I used describing something not a screenshot. Being able to pick up on context clues from the reading is a crucial skill one should have in life. It also makes one look less clueless in conversation when the topics shift quickly and one can keep up.
None of this warrants the type of response they got, nor your attitude.
Periods go inside of quotes, even mealy mouthed shock quotes because an internet abbreviation made you upset.
Nah it's your attitude that brings nothing worthwhile.
Oh, so your screenshot utility has "long exposure" and an "ND" filter and "shutter speed" controls, just like a phone's camera? What kind of screenshot utility simulates optical camera effects? What purpose does that serve? Care to share a link to it?
>Obviously harder to expose a bright screen without some ND since the shutter speed is the phone's main exposure control
https://en.wikipedia.org/wiki/Neutral-density_filter
https://en.wikipedia.org/wiki/Shutter_speed
I think there are usecases for this.
Some countries switched to identity apps instead of plastic identity cards. You could make sensitive data non-screenshottable and non-photographable.
A modern variant to the passport anti identity fraud cover: https://merk.anwb.nl/transform/a9b4e52a-9ba1-414b-b199-29085...
The hotel you are checking into doesn't need to know your DOB, height, SSN, birthplace, validity dates and document number. But they will demand a photo of the ID anyway.
> You could make sensitive data non-screenshottable and non-photographable.
That made me curious, so I took a photo of my laptop screen running this page.
With default camera settings, the text wasn't visible to me in the photo on my phone screen.
However, setting the exposure time manually to 0.5s, the text came out white on a noisy background and I could easily read it on the phone screen.
I would not be surprised if the default camera settings photo could be processed ("enhance!") to make the text visible, but I didn't try.
I think it also depends on the response time of the display and even the temperature.
Instead of having the pixels on the letters scrolling down, wouldn't it also work if the pixels were simply re-randomized every frame?
yes
I don't see any text: just a scrolling down screen of random black/white pixels.
It seems to depend on reading pixels from a canvas. This is commonly used for fingerprinting users on the web, so you have to disable some privacy plugins.
On my Chrome-descended browser, the initial screen is populated by something that appears to be some sort of downsampled grid image, resulting in black and white, but also various shades of grey. However the scrolling text is pure black and white. It also seems the canvas is persistent, so the result is that text on the canvas is leaving a shadow for me, where I can still read the shadow. Somehow the initial noise is not coming out as just black and white pixels.
You can pass in different text. eg:
https://unscreenshottable.vercel.app/?text=Bonjour
Neat! I've seen stuff like this that works as a magic eye thing. So you cross your eyes (or make them parallel, depending on the type of image) and it makes a 3d animation appear in front of the page.
I’d like to see an example!
Doesn't even show anything on LibreWolf, probably disabled WebGL as usual. I thought it was a nice error screen, but apparently it was intended, just without the text :P
Seems to work if you disable canvas fingerprinting protection.
Another idea I had with this concept is to make an LLM-proof captcha. Maybe humans can detect the characters in the 'motion' itself, which could be unique to us?
- The captcha would be generated like this on a headless browser, and recorded as a video, which is then served to the user.
- We can make the background also move in random directions, to prevent just detecting which pixels are changing and drawing an outline.
- I tried also having the text itself move (bounce like the DVD logo). Somehow makes it even more readable.
I definitely know nothing about how LLMs interpret video, or optics, so please let me know if this is dumb.
I don't think we need more capable people thinking of silly captchas.
Take N screenshots, XOR them pairwise, OR the results, then perform normal OCR.
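A sketch of that recipe in Python with NumPy and Pillow (file names hypothetical): binarize the shots, XOR consecutive pairs to find pixels that changed, OR the results so any pixel that ever changed is marked, then hand the mask to an OCR tool.

```python
import numpy as np
from PIL import Image

# N screenshots of the animation, binarized (hypothetical file names).
shots = [np.asarray(Image.open(f"shot_{i}.png").convert("1"), dtype=bool)
         for i in range(4)]

# XOR consecutive pairs to mark changed pixels, OR the results together.
mask = np.zeros_like(shots[0])
for prev, cur in zip(shots, shots[1:]):
    mask |= prev ^ cur

Image.fromarray(mask.astype(np.uint8) * 255).save("for_ocr.png")
```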
Yes but this is prohibitively expensive for a large bot network to employ.
Wasn't that the whole point of Anubis?
As if captchas aren't painful enough for visually impaired users...
Fun!
I always wanted to make text that couldn't be recorded with a video recorder, but that doesn't seem possible.
Maybe if you knew the exact framerate that the camera was recording at, you could do the same trick, but I don't think cameras are that consistent.
It also disappears if you shake your phone (or your computer screen, but that's harder).
Cool. I used the Windows snipping tool and just screen-recorded it.
Firefox on Android seems to just be a static image, I can't see any text.
Probably the result of canvas fingerprinting protection configured in your `about:config`? With a default profile it seems to work fine on Firefox for Android.
I haven't changed any of that on here.
Looks like I consistently get just the static image when I open in a new tab then switch to it, but then if I refresh the page without switching tabs it'll show the animation.
Wfm
I have to admit it's a pretty cool idea.
At first I was worried that there was a (stupid) API in web browsers just like on mobiles to prevent users from screenshotting something by blanking the screen in the screenshot.
Not technically a screenshot, I guess, but trivially easy to do with software I had lying around all the same. https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExYXloZ3Z0NT...
This idea has made me think of another subject - would it be possible to overload a face / car plate scanning camera by using a pattern, like a QR code for example? Or a jacket made of QR codes?
Reminds me of dazzle camouflage.
https://en.wikipedia.org/wiki/Dazzle_camouflage
This would make for a great effect for a technothriller. Like a cyber ransom or something like that.
It's a nice effect, but I don't think it's usable in practice, because it's not accessible for visually impaired users: not enough contrast between foreground text and background
Could someone please post what this disappeared bit is supposed to look like? Looks legible to me when I screenshot and open in Preview on MacOS 15.6.1 (Firefox).
You are probably browsing with zoom; that seems to screw up the rendering and makes the background and text look different. It should be just black & white random pixel noise for both background and foreground; without motion the text becomes invisible, as it blends with the background.
Ha cool! How’s it work?
The only way to see the text is in the movement. The pattern across any single frame is entirely random noise.
> The pattern across any single frame is entirely random noise.
This is untrue in at least one sense. The patterning within the animated letters cycles. It is generated either by evaluating a periodic function or by reading from a file using a periodic offset.
Can't it be continuous random noise added at the top and then moved down each frame?
Roughly: you create another full-size rect. On each frame, add a row of random pixels at row 1 and shift everything down.
Make that rect a layer below the top one, which has "Hello" cut out as transparent.
In any single frame the result is random noise.
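A minimal sketch of that proposed variant in Python with NumPy (a placeholder rectangle stands in for rendered "Hello" glyphs; this illustrates the suggestion, not necessarily what the page itself does):

```python
import numpy as np

rng = np.random.default_rng()
H, W = 120, 320

background = rng.integers(0, 2, (H, W), dtype=np.uint8)  # static noise layer
scroll = rng.integers(0, 2, (H, W), dtype=np.uint8)      # scrolling noise layer
text_mask = np.zeros((H, W), dtype=bool)                 # True inside the letters
text_mask[40:80, 60:260] = True                          # placeholder for "Hello"

def next_frame():
    """Shift the lower layer down one row and composite through the cutout."""
    global scroll
    scroll = np.roll(scroll, 1, axis=0)
    scroll[0] = rng.integers(0, 2, W, dtype=np.uint8)    # fresh random top row
    return np.where(text_mask, scroll, background)
```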
You could do that, but that's not what the page is doing.
You don't even need to maintain the approach of having the pattern within the text move downwards over time. You could redraw it every frame with random data, as if it was television static. It would still be easy to read, as long as the background stayed fixed.
They didn't mean random noise as in certifiably truly random in a cryptographic sense... nobody cares about that for a silly demo.
Random noise as in: a normal non-tech human cannot see anything discernible in it at all, without the motion component.
It's not a great visual, but like this: https://michaelbach.de/ot/cog-Dalmatian/
Even if some have found a workaround, this is a cool feature.
What am I supposed to see here? It's just a static noisy background.
Had the same in LibreWolf under Manjaro Linux. Worked in Chrome.
Animation, but only inside a border that is the letters of Hello.
Appreciate that it handles emoji as well. Can't distinguish between smileys though.
I also appreciate that Hn removes emojis from comments. :'(
I uploaded two images to ChatGPT and asked it to XOR them and give me the result in text.
Yeah but the randomness may produce all kinds of NSFW stuff ...
Also, it's even harder to read than most captchas.
But fun idea, it was nice to see.
Was this made with v0?
You can zoom out and then it's screenshottable.
You can also break it by recording the screen, of course.
same thing, but a game: https://brantagames.itch.io/motus
This could be used for Captcha systems. Would current bots be able to decipher these?
Yes, you can make ChatGPT decipher this already.
But doing this on a massive scale would warm the planet.
And it's not friendly accessibility-wise.
A good benchmark for video understanding in AI.
Sure, but I can just record a video instead. It doesn’t disappear then!
For what it's worth, there are some websites that embed some crazy shit when you screenshot. On reddit, r/CenturyClub will fill your background with a slightly off-white version of your username so that they can identify leakers, and I'm not certain how exactly they do it.
Fun side effect: staring at the letters for a bit makes the rest of the image move.
If you blink really fast, the text almost disappears.
Firefox on Linux with a bunch of CSS stuff set to defaults or `none !important` shows a static image.
> let textString = `hello`
I think further obfuscation could be possible by uglifying the script and providing an SVG path that stores the text as a vector image.
Self modifying code could be useful too, to delete the SVG data once it is in the canvas.
I fully expect this to still be defeated by AI though, such is my presumption that AI is smarter than me, always. It won't care about uglification and it would just laugh to itself at my humble efforts to defeat Skynet.
Regarding practical applications, nowadays kids sell weed online quite brazenly on platforms such as Instagram. Prostitutes also sell their services on Telegram. It is only a matter of time before this type of usage gets clamped down on, so there may come a time when this approach will be needed to thwart the authorities.
This is good but I feel it can somehow be made better!
I like the idea of motion revealing things out of randomness; a screenshot only captures the randomness.
You can just take a screencast though hehe
Ultimately people will just take photos of the screen. Seems like you’re just annoying people.
I feel like there’s an ethical issue. If something is on my screen I own it. I know the law doesn’t agree but it feels right to me.
The point is that it's noise and you can't "capture" a still image of the text / information (relies on motion to be viewable).
We figured out how to capture video though. And ChatGPT can already decipher this.
Had a lot of fun trying to break this. Turns out you can screenshot real easily by zooming out. Maybe there are other ways but I stopped trying :)
yeah - I actually was initially confused since I wasn't having any issues screenshotting it but had forgotten that I have the default site zoom set to ~65%.
Not sure what you mean - I can screenshot it freely, but that's not the point. The point is that if you then look at the screenshot you can't discern the text, because it's a single frame now.
He's right. This is zoomed out: https://imgur.com/a/G7CKZ94
This is on MacOS 15.6, Chromium (BrowserOS), captured with the OS' native screenshot utility. Since I was asked about the zoom factor, I now tried simply capturing it at 100% and it was still perfectly readable...
I guess the trick doesn't work on this browser.
I zoomed out to 90% and could make out that something was there, but it wasn't easy to read. Zooming out further went back to just being noise. I also tried zooming in, but with no success. What zoom level did you use? And I guess we have to ask the standard questions: what browser/version/OS/etc.? My Firefox 142 on macOS never took a screen grab like you did.
This is really interesting - because it means the "randomness" is different between the text and the background, and when you zoom out enough, the eye can distinguish it?
Hmm, I think it's probably just an aliasing / canvas drawing issue. When I bring in a screenshot taken heavily zoomed out (33%), the pixels comprising the "HELLO" shape have a significantly higher luminance than the rest of the background.
Zooming out before taking screenshot and the text is no longer obfuscated. I tried and confirmed it works. In fact, the text is perhaps even more readable than the original.
It depends how fast or slow your GPU is. I tried it and saw the effect you described, but within a second or two it started moving and was obscured again. Obviously you could automate the problem away.
Mine freezes the animation on zoom change. Not sure you could automate against that
What I meant was that even if it only freezes for a second, you could automate the screenshots to be captured during that time instead of trying to beat the clock manually
but screen recording works :)
On your phone, just record the screen, then scrub through the player to see how every still frame blends in with its surroundings, but as soon as it moves the text shows up.
The text reappears when I screenshot it twice.
Screenshotted fine in Xfce.
Seems trivial to diff multiple screenshots to identify what parts move. Or just use a compression algorithm to do the same.
Would 2 screenshots be enough, I wonder?
Yeah, the letters are big enough; an XOR shows the text quite clearly.
"you cannot screenshot this already illegible mess of white noise"
Coinbase was hacked for $400M when literally someone from outsourced support services was taking screenshots on their phone!
The culprit had more than 10k photos of all security details for thousands of wealthy customers.
If it's even true that someone from outsourced support has access to sensitive security details, then using this dumpster is almost like throwing your money out of the window.