headless-chrome
Here are 340 public repositories matching this topic...
Here is the code snippet:
nightmare
  .on('console', (log, msg) => {
    console.log(msg)
  })
  .on('error', (err) => {
    console.log(err)
  })
  .goto(url)
  .inject('js', 'jquery.min.js')
  .wait('#btnSearchClubs')
  .click('#btnSearchClubs')
  .wait(5000)
  .evaluate(function () {
    const pageAnchor = Array.from(document.querySelectorAll(
Updated Nov 13, 2018 - TypeScript
Page Requirements
- ability to add a single page to the archive (like echo <url> | archivebox add)
- ability to import a list of pages / feed of URLs into the archive (like archivebox add <url>)
- link to the homepage of the index (/)
- link to the django admin list of URLs for editing the archive (/admin/)
- link to the archivebox github repo & documentation
Updated Jul 15, 2020 - HTML
As our caching options became more flexible, and caching is an integral part of Rendertron, we would like to introduce a different way of handling cache configuration.
Currently, the cache configuration key is cache and can be either null, datastore or memory.
The goals of this issue are:
- Change the cache property (https://github.com/GoogleChrome/rendertron/blob/0ed866fa98b31884289
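For context, here is a minimal sketch of what the current configuration described above looks like (only the cache property and its three values come from the issue text; the surrounding file layout is an assumption):

```javascript
// Sketch of a Rendertron config object; "cache" is the property discussed
// above and, per the issue, accepts null, 'datastore' or 'memory'.
module.exports = {
  cache: 'memory', // or 'datastore', or null to disable caching entirely
};
```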
What is the current behavior?
Crawling a website that uses # (hashes) for URL navigation does not crawl the pages behind those hashes; the URLs using # are not followed.
If the current behavior is a bug, please provide the steps to reproduce
Try crawling a website like mykita.com/en/
What is the motivation / use case for changing the behavior?
Though hashes are not meant to chan
Updated May 8, 2020 - Python
Typo in docs
var pgsql = require('pdf-bot/src/db/pgsql')
module.exports = {
  api: {
    token: 'api-token'
  },
  db: pgsql({
    database: 'pdfbot',
    username: 'pdfbot',
    password: 'pdfbot',
    port: 5432
  }),
  webhook: {
    secret: '1234',
    url: 'http://localhost:3000/webhooks/pdf'
  }
}
Can you please change "username" to "user", since that is the correct option there?
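For reference, only the db section would change; a sketch of the corrected fragment, assuming the option name follows the underlying Postgres client convention:

```javascript
// Corrected db section: "user" instead of "username"
db: pgsql({
  database: 'pdfbot',
  user: 'pdfbot',
  password: 'pdfbot',
  port: 5432
}),
```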
Also, it's not "@apify/httpRequest" package but "@apify/http-request"
Actual
Test environments don't set up certificates. Using Taiko against these environments produces certificate errors. For example:
goto("https://172.0.1.111:1234")
[FAIL] Error: Navigation to url https://172.0.1.111:1234 failed.
REASON: net::ERR_CERT_AUTHORITY_INVALID, run .trace for more info.
Change
Ignore certificates by default. The tester can choose to not ig
I'm trying to follow the quick guide, I've done these steps:
serverless create -u https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
export AWS_PROFILE=serverless
npm run deploy
However at point 3, I'm not in the right folder, am I?
If I cd aws and then run npm run deploy I get an error:
Serverless Error ---------------
Updated Jul 7, 2020 - JavaScript
Issue
When using SingleBrowserImplementation and Chrome gets into a state in which it cannot be restarted, the error does not bubble up, which causes a JavaScript unhandledRejection. Since there is no way to catch this, it forces consuming code into a dead end. Using node v8.11.1.
Reproduction:
I have not found a way to put Chrome into such a state that it cannot be restarted, so the rep
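One defensive pattern while the bug stands is to route every pool task through a wrapper so failures come back as values instead of escaping as unhandled rejections. A generic sketch (safeTask is a made-up helper, not part of the library):

```javascript
// Wrap an async task so any failure is returned as data, never thrown
// into the void as an unhandled rejection.
async function safeTask(task) {
  try {
    return { ok: true, value: await task() };
  } catch (error) {
    return { ok: false, error };
  }
}

// Usage: the consumer always gets a result object it can inspect.
safeTask(() => Promise.reject(new Error('chrome could not restart')))
  .then((result) => {
    if (!result.ok) console.error('task failed:', result.error.message);
  });
```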
Updated Jul 11, 2020
The graphql version referenced by navalia@1.3.0 and the jest version referenced by create-react-app@1.1.1 seem to be incompatible with one another, giving this error message:
FAIL src/react.spec.js
● Test suite failed to run
/Users/sgreene/src/tutorials/test-navalia/node_modules/graphql/index.mjs:2
export { graphql, graphqlSync } from './graphql';
^^^^^^
Steps to re
Updated Jul 2, 2020 - Ruby
My current approach is concatenation. For example, for http://www.A.com with two known paths /path_a and /path_b, the command becomes:
crawlergo -c chrome http://www.A.com/ http://www.A.com/path_a http://www.A.com/path_b
Two questions:
- If there are many known paths, concatenating them by hand is tedious.
- Does passing them concatenated like this give the same result as running each one separately? Or is there a difference? I have not verified this.
Of course, it would be best if a future version could accept multiple paths as entry points via a parameter.
Originally posted by @djerrystyle in 0Kee-Team/crawlergo#31 (comment)
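Until such a parameter exists, the concatenation can at least be scripted; a small sketch that builds the argument list from a list of known paths (the base URL and paths are the ones from the example above):

```javascript
// Expand a base URL plus known paths into the crawlergo argument list.
const base = 'http://www.A.com';
const knownPaths = ['/path_a', '/path_b'];

const targets = [base + '/', ...knownPaths.map((p) => base + p)];
console.log(['crawlergo', '-c', 'chrome', ...targets].join(' '));
// crawlergo -c chrome http://www.A.com/ http://www.A.com/path_a http://www.A.com/path_b
```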
Pretty simple stuff: I couldn't find examples for the file flag when checking the docs. Also, the info presented when running --help doesn't say much. I ended up checking file.go to get the proper syntax and then came across this:
$ gowitness file -s ~/Desktop/urls
$ gowitness file --source ~/Desktop/urls --threads -2
So maybe just add it directly to the main docs?
Thanks
Updated Nov 22, 2018 - JavaScript
None of the completion triggers worked for my react app. I checked puppeteer's waitUntil: "networkidle0" and that worked for me. https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagegotourl-options
Updated Jan 18, 2020 - JavaScript
Tell us about your environment:
What steps will reproduce the problem?
Try to pass a promise to await page.waitForResponse(response => condition) instead of urlOrPredicate.
What is the expected result?
Expect the async function to work.
What happens instead?
There is no waiting; the promise is not treated as a predicate.