Google's "Report a Bug" or "Feedback Tool" lets you select an area of your browser window to create a screenshot that is submitted with your feedback about a bug.
(Screenshot of the feedback tool: https://i.stack.imgur.com/CDhEi.png)
How are they doing this? Google's JavaScript feedback API is loaded from here, and their overview of the feedback module demonstrates the screenshot capability.
JavaScript can read the DOM and render a fairly accurate representation of it using canvas. I have been working on a script which converts HTML into a canvas image, and today I decided to make an implementation of it for sending feedback like you described.
The script allows you to create feedback forms which include a screenshot, created in the client's browser, along with the form. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation: it does not take an actual screenshot, but builds one from the information available on the page.
It does not require any rendering from the server, as the whole image is created in the client's browser. The html2canvas script itself is still in a very experimental state: it does not parse nearly as many of the CSS3 attributes as I would like it to, nor does it have any support for loading CORS images even if a proxy were available.
Browser compatibility is also still quite limited, not because more browsers couldn't be supported, but because I haven't had time to make it more cross-browser compatible.
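To illustrate the general idea described above (this is not html2canvas itself, and naiveDomToCanvas is just a hypothetical name for this sketch): walk the DOM, read each element's computed style, and paint its box onto a canvas.

// Toy sketch of the traversal-and-draw idea behind DOM-based screenshots
function naiveDomToCanvas(root) {
  const canvas = document.createElement('canvas');
  canvas.width = root.scrollWidth;
  canvas.height = root.scrollHeight;
  const ctx = canvas.getContext('2d');
  for (const el of root.querySelectorAll('*')) {
    const rect = el.getBoundingClientRect();
    const style = getComputedStyle(el);
    if (style.backgroundColor !== 'rgba(0, 0, 0, 0)') { // skip transparent boxes
      ctx.fillStyle = style.backgroundColor;
      ctx.fillRect(rect.left + window.scrollX, rect.top + window.scrollY,
                   rect.width, rect.height);
    }
  }
  return canvas; // a very rough "screenshot" of the page's boxes
}

A real implementation does far more work per node (text, images, borders, stacking contexts), but this is the core loop.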
For more information, have a look at the examples here:
http://hertzen.com/experiments/jsfeedback/
Edit: The html2canvas script is now available separately here, along with some examples here.
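For reference, current releases of html2canvas expose a promise-based API (the onrendered callback used in the examples further down belongs to the older experimental versions); a minimal usage sketch:

// Render the whole page into a canvas and preview it inline
html2canvas(document.body).then(canvas => {
  document.body.appendChild(canvas);
});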
Edit 2: Another confirmation that Google uses a very similar method (in fact, based on the documentation, the only major difference is their asynchronous method of traversing/drawing) can be found in this presentation by Elliott Sprehn from the Google Feedback team: http://www.elliottsprehn.com/preso/fluentconf/
Your web app can now take a 'native' screenshot of the client's entire desktop using getUserMedia():
Have a look at this example:
https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/
The client will have to be using Chrome (for now) and will need to enable screen capture support under chrome://flags.
Navigator.getUserMedia() is deprecated, but just below that the documentation says "... Please use the newer navigator.mediaDevices.getUserMedia()", i.e. it was simply replaced with a newer API.
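For illustration, a minimal sketch of the newer call; same idea, but it returns a promise (actual screen capture now goes through getDisplayMedia, covered in later answers):

// The modern promise-based replacement for Navigator.getUserMedia()
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then(stream => {
    const video = document.createElement('video');
    video.srcObject = stream; // attach the captured stream
    document.body.appendChild(video);
    return video.play();
  })
  .catch(err => console.log(`${err.name}: ${err.message}`));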
PoC
As Niklas mentioned, you can use the html2canvas library to take a screenshot using JS in the browser. I will extend his answer by providing an example of taking a screenshot with this library ("Proof of Concept"):
function report() {
    let region = document.querySelector("body"); // whole screen
    html2canvas(region, {
        onrendered: function(canvas) {
            let pngUrl = canvas.toDataURL(); // png in dataURL format
            let img = document.querySelector(".screen");
            img.src = pngUrl;

            // here you can allow user to set bug-region
            // and send it with 'pngUrl' to server
        },
    });
}

.container {
    margin-top: 10px;
    border: solid 1px black;
}
In the report() function, in onrendered, after getting the image as a data URI you can show it to the user, allow them to draw a "bug region" with the mouse, and then send the screenshot and region coordinates to the server.
In this example, an async/await version was made, with a nice makeScreenshot() function.
UPDATE
Here is a simple example which allows you to take a screenshot, select a region, describe the bug, and send a POST request (here jsfiddle) (the main function is report()).
async function report() {
    let screenshot = await makeScreenshot(); // png dataUrl
    let img = q(".screen");
    img.src = screenshot;

    let c = q(".bug-container");
    c.classList.remove('hide');
    let box = await getBox();
    c.classList.add('hide');

    send(screenshot, box); // send POST request with bug image, region and description
    alert('To see POST request with image go to: chrome console > network tab');
}

// ----- Helper functions

let q = s => document.querySelector(s); // query selector helper
window.report = report; // bind report to be visible in fiddle html

async function makeScreenshot(selector = "body") {
    return new Promise((resolve, reject) => {
        let node = document.querySelector(selector);
        html2canvas(node, {
            onrendered: (canvas) => {
                let pngUrl = canvas.toDataURL();
                resolve(pngUrl);
            }
        });
    });
}

async function getBox() {
    return new Promise((resolve, reject) => {
        let b = q(".bug");
        let r = q(".region");
        let scr = q(".screen");
        let send = q(".send");
        let start = 0;
        let sx, sy, ex, ey = -1;
        r.style.width = 0;
        r.style.height = 0;

        let drawBox = () => {
            r.style.left = (ex > 0 ? sx : sx + ex) + 'px';
            r.style.top = (ey > 0 ? sy : sy + ey) + 'px';
            r.style.width = Math.abs(ex) + 'px';
            r.style.height = Math.abs(ey) + 'px';
        };

        b.addEventListener("click", e => {
            if (start == 0) {
                sx = e.pageX;
                sy = e.pageY;
                ex = 0;
                ey = 0;
                drawBox();
            }
            start = (start + 1) % 3;
        });

        b.addEventListener("mousemove", e => {
            if (start == 1) {
                ex = e.pageX - sx;
                ey = e.pageY - sy;
                drawBox();
            }
        });

        send.addEventListener("click", e => {
            start = 0;
            let a = 100 / 75; // the preview image is zoomed out to 75%, so scale coordinates back up
            resolve({
                x: Math.floor(((ex > 0 ? sx : sx + ex) - scr.offsetLeft) * a),
                y: Math.floor(((ey > 0 ? sy : sy + ey) - b.offsetTop) * a),
                width: Math.floor(Math.abs(ex) * a),
                height: Math.floor(Math.abs(ey) * a),
                desc: q('.bug-desc').value
            });
        });
    });
}

function send(image, box) {
    let formData = new FormData();
    let req = new XMLHttpRequest();
    formData.append("box", JSON.stringify(box));
    formData.append("screenshot", image);
    req.open("POST", '/upload/screenshot');
    req.send(formData);
}

.bug-container {
    background: rgba(255, 0, 0, 0.1);
    margin-top: 20px;
    text-align: center;
}

.send {
    border-radius: 5px;
    padding: 10px;
    background: green;
    cursor: pointer;
}

.region {
    position: absolute;
    background: rgba(255, 0, 0, 0.4);
}

.example {
    height: 100px;
    background: yellow;
}

.bug {
    margin-top: 10px;
    cursor: crosshair;
}

.hide {
    display: none;
}

.screen {
    pointer-events: none;
}
Get a screenshot as a Canvas or JPEG Blob / ArrayBuffer using the getDisplayMedia API:
FIX 1: Use getUserMedia with chromeMediaSource only for Electron.js
FIX 2: Throw an error instead of returning a null object
FIX 3: Fix the demo to prevent the error "getDisplayMedia must be called from a user gesture handler"
// docs: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
// see: https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/#20893521368186473
// see: https://github.com/muaz-khan/WebRTC-Experiment/blob/master/Pluginfree-Screen-Sharing/conference.js
function getDisplayMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getDisplayMedia) {
return navigator.mediaDevices.getDisplayMedia(options)
}
if (navigator.getDisplayMedia) {
return navigator.getDisplayMedia(options)
}
if (navigator.webkitGetDisplayMedia) {
return navigator.webkitGetDisplayMedia(options)
}
if (navigator.mozGetDisplayMedia) {
return navigator.mozGetDisplayMedia(options)
}
throw new Error('getDisplayMedia is not defined')
}
function getUserMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
return navigator.mediaDevices.getUserMedia(options)
}
if (navigator.getUserMedia) {
return navigator.getUserMedia(options)
}
if (navigator.webkitGetUserMedia) {
return navigator.webkitGetUserMedia(options)
}
if (navigator.mozGetUserMedia) {
return navigator.mozGetUserMedia(options)
}
throw new Error('getUserMedia is not defined')
}
async function takeScreenshotStream() {
// see: https://developer.mozilla.org/en-US/docs/Web/API/Window/screen
const width = screen.width * (window.devicePixelRatio || 1)
const height = screen.height * (window.devicePixelRatio || 1)
const errors = []
let stream
try {
stream = await getDisplayMedia({
audio: false,
// see: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamConstraints/video
video: {
width,
height,
frameRate: 1,
},
})
} catch (ex) {
errors.push(ex)
}
// for electron js
if (navigator.userAgent.indexOf('Electron') >= 0) {
try {
stream = await getUserMedia({
audio: false,
video: {
mandatory: {
chromeMediaSource: 'desktop',
// chromeMediaSourceId: source.id,
minWidth : width,
maxWidth : width,
minHeight : height,
maxHeight : height,
},
},
})
} catch (ex) {
errors.push(ex)
}
}
if (errors.length) {
console.debug(...errors)
if (!stream) {
throw errors[errors.length - 1]
}
}
return stream
}
async function takeScreenshotCanvas() {
const stream = await takeScreenshotStream()
// from: https://stackoverflow.com/a/57665309/5221762
const video = document.createElement('video')
const result = await new Promise((resolve, reject) => {
video.onloadedmetadata = () => {
video.play()
video.pause()
// from: https://github.com/kasprownik/electron-screencapture/blob/master/index.js
const canvas = document.createElement('canvas')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
const context = canvas.getContext('2d')
// see: https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
resolve(canvas)
}
video.srcObject = stream
})
stream.getTracks().forEach(function (track) {
track.stop()
})
if (result == null) {
throw new Error('Cannot take canvas screenshot')
}
return result
}
// from: https://stackoverflow.com/a/46182044/5221762
function getJpegBlob(canvas) {
return new Promise((resolve, reject) => {
// docs: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
canvas.toBlob(blob => resolve(blob), 'image/jpeg', 0.95)
})
}
async function getJpegBytes(canvas) {
const blob = await getJpegBlob(canvas)
return new Promise((resolve, reject) => {
const fileReader = new FileReader()
fileReader.addEventListener('loadend', function () {
if (this.error) {
reject(this.error)
return
}
resolve(this.result)
})
fileReader.readAsArrayBuffer(blob)
})
}
async function takeScreenshotJpegBlob() {
const canvas = await takeScreenshotCanvas()
return getJpegBlob(canvas)
}
async function takeScreenshotJpegBytes() {
const canvas = await takeScreenshotCanvas()
return getJpegBytes(canvas)
}
function blobToCanvas(blob, maxWidth, maxHeight) {
return new Promise((resolve, reject) => {
const img = new Image()
img.onload = function () {
const canvas = document.createElement('canvas')
const scale = Math.min(
1,
maxWidth ? maxWidth / img.width : 1,
maxHeight ? maxHeight / img.height : 1,
)
canvas.width = img.width * scale
canvas.height = img.height * scale
const ctx = canvas.getContext('2d')
ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, canvas.width, canvas.height)
resolve(canvas)
}
img.onerror = () => {
reject(new Error('Error load blob to Image'))
}
img.src = URL.createObjectURL(blob)
})
}
DEMO:
document.body.onclick = async () => {
// take the screenshot
var screenshotJpegBlob = await takeScreenshotJpegBlob()
// show preview with max size 300 x 300 px
var previewCanvas = await blobToCanvas(screenshotJpegBlob, 300, 300)
previewCanvas.style.position = 'fixed'
document.body.appendChild(previewCanvas)
  // send it to the server
  var formdata = new FormData()
  formdata.append("screenshot", screenshotJpegBlob)
  await fetch('https://your-web-site.com/', {
    method: 'POST',
    body: formdata,
    // note: do not set Content-Type manually; the browser adds the
    // correct multipart boundary for FormData bodies automatically
  })
}
// and click on the page
Here is a complete screenshot example that works with Chrome in 2021. The end result is a blob ready to be transmitted. The flow is: request media > grab frame > draw to canvas > transfer to blob. If you want to do it in a more memory-efficient way, explore OffscreenCanvas or possibly ImageBitmapRenderingContext.
https://jsfiddle.net/v24hyd3q/1/
// Assumes a <canvas> element on the page (as in the fiddle); create one otherwise
const canvas = document.querySelector('canvas') || document.body.appendChild(document.createElement('canvas'));

// Request media
navigator.mediaDevices.getDisplayMedia().then(stream =>
{
// Grab frame from stream
let track = stream.getVideoTracks()[0];
let capture = new ImageCapture(track);
capture.grabFrame().then(bitmap =>
{
// Stop sharing
track.stop();
// Draw the bitmap to canvas
canvas.width = bitmap.width;
canvas.height = bitmap.height;
canvas.getContext('2d').drawImage(bitmap, 0, 0);
// Grab blob from canvas
canvas.toBlob(blob => {
// Do things with blob here
console.log('output blob:', blob);
});
});
})
.catch(e => console.log(e));
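For the more memory-efficient variant mentioned above, here is a sketch assuming OffscreenCanvas and ImageBitmapRenderingContext support: the 'bitmaprenderer' context takes ownership of the grabbed bitmap instead of redrawing it through a 2D context.

// Same flow as above, but without an intermediate 2D-canvas copy
navigator.mediaDevices.getDisplayMedia().then(async stream => {
  const track = stream.getVideoTracks()[0];
  const bitmap = await new ImageCapture(track).grabFrame();
  track.stop(); // stop sharing as soon as the frame is grabbed
  // transfer the bitmap directly into an off-DOM canvas
  const offscreen = new OffscreenCanvas(bitmap.width, bitmap.height);
  offscreen.getContext('bitmaprenderer').transferFromImageBitmap(bitmap);
  const blob = await offscreen.convertToBlob({ type: 'image/png' });
  console.log('output blob:', blob);
}).catch(e => console.log(e));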
Here's an example using getDisplayMedia:
document.body.innerHTML = '<video style="width: 100%; height: 100%; border: 1px black solid;"/>';
navigator.mediaDevices.getDisplayMedia()
.then( mediaStream => {
const video = document.querySelector('video');
video.srcObject = mediaStream;
video.onloadedmetadata = e => {
video.play();
video.pause();
};
})
.catch( err => console.log(`${err.name}: ${err.message}`));
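To turn the paused frame into an actual image, the same canvas trick used in the answers above applies; a small sketch (frameToDataUrl is a hypothetical helper):

// Copy the current (paused) video frame to a canvas and
// read it back as a PNG data URL
function frameToDataUrl(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png');
}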
Also worth checking out is the Screen Capture API docs.
You can try my new JS library: screenshot.js.
It enables you to take a real screenshot.
You load the script:
<script src="https://raw.githubusercontent.com/amiad/screenshot.js/master/screenshot.js"></script>
and take a screenshot:
new Screenshot({success: img => {
// callback function
myimage = img;
}});
You can read about more options on the project page.