
requests

Here are 1,693 public repositories matching this topic...

requests
psiyan
psiyan commented Feb 28, 2020

When using the URL http://docs.python-requests.org/en/latest/, it redirects to https://2.python-requests.org//en/latest/ (notice the extra / before en). This causes an HTTP 404.

Expected Result

The redirect should be to https://2.python-requests.org/en/latest/

Actual Result

HTTP 404

Reproduction Steps

Try to visit the latest en documentation for requests using the
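A plausible mechanism for the doubled slash (this is an assumption on my part; the actual redirect rule isn't visible in the report) is naive string concatenation of a base URL ending in "/" with a path that also begins with "/":

```python
# Assumed root cause, sketched: joining a trailing-slash base with a
# leading-slash path doubles the separator.
base = "https://2.python-requests.org/"
path = "/en/latest/"

broken = base + path
assert broken == "https://2.python-requests.org//en/latest/"

# Stripping the duplicate before joining yields the expected target:
fixed = base.rstrip("/") + path
assert fixed == "https://2.python-requests.org/en/latest/"
```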

marekstodolny
marekstodolny commented Jan 29, 2020

Description
Sending empty files in a multipart POST form is a valid use case, since web browsers support it.

I worked on an API layer using guzzle that had to work with an existing legacy codebase and trigger some actions (a proxy of sorts). There was a case where files had to be sent with empty content, and the request would not work if they were omitted.

Example
_Currently no implem
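For comparison, python's requests accepts a zero-byte file part without complaint. A minimal sketch; the URL and field names here are placeholders of mine, not part of the original report:

```python
import requests
from io import BytesIO

# A zero-byte file part in a multipart POST. Preparing the request is
# enough to inspect the encoded body; no network call is made.
files = {"attachment": ("empty.txt", BytesIO(b""), "text/plain")}
req = requests.Request("POST", "http://example.com/upload",
                       files=files).prepare()

# The part is present, with its filename, even though the content is empty.
assert b'filename="empty.txt"' in req.body
```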

oldani
oldani commented Feb 18, 2019

If you're using proxies with requests-html, rendering JS sites seems fine at first. But once you render a website, pyppeteer doesn't know about these proxies and will expose your IP. This is undesired behavior when scraping through proxies.

The idea is that whenever someone passes proxies to the session object or to any method call, pyppeteer should also use those proxies. #265
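One way this could work, sketched below: translate the requests-style proxies dict into the `--proxy-server` flag that Chromium accepts via pyppeteer's `launch(args=[...])`. The helper name and the mapping are my own assumptions, not requests-html's API:

```python
# Hypothetical helper: map a requests-style proxies dict onto the Chromium
# proxy flag that pyppeteer's launch(args=[...]) passes through.
def chromium_proxy_args(proxies):
    proxy = proxies.get("https") or proxies.get("http")
    return [f"--proxy-server={proxy}"] if proxy else []

args = chromium_proxy_args({"http": "http://10.0.0.1:3128",
                            "https": "http://10.0.0.1:3128"})
assert args == ["--proxy-server=http://10.0.0.1:3128"]
```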

Kamik423
Kamik423 commented Apr 22, 2020

When using browser.get(url, stream=True), it does not in fact stream; it waits for the entire content to download and adds it to the soup. One has to call browser.session.get(...) instead.

This should be documented somewhere, or there should be a get_without_adding_to_soup. I spent 3 hours debugging this yesterday; maybe this can save someone some time in the future.

https://github.com/MechanicalSo
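The workaround can be sketched like this, with a throwaway local server standing in for the real site (the server and its response are mine, purely for illustration): going through the underlying requests session directly lets stream=True defer the body download.

```python
import threading
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny stand-in server so the example is self-contained.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# browser.session is a plain requests.Session under the hood; using a
# Session directly is the fix — the body is only pulled via iter_content.
session = requests.Session()
resp = session.get(f"http://127.0.0.1:{server.server_port}/", stream=True)
body = b"".join(resp.iter_content(chunk_size=2))
assert body == b"hello"
server.shutdown()
```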

WesThorburn
WesThorburn commented Mar 12, 2018

Sample Code:

auto r = cpr::Get(cpr::Url{ "https://youtube.com" },cpr::VerifySsl(false));
std::cout << r.url << std::endl; 
std::cout << r.status_code << std::endl; // 200
std::cout << r.text << std::endl;

Output:

https://youtube.com/
0


The above output is identical even if cpr::VerifySsl(false) is removed. The request works fine over HTTP.

I'm running Ubuntu 16.

ArtanisCV
ArtanisCV commented Aug 27, 2019

Currently grequests always sets a filename when creating a multipart POST request (https://github.com/levigross/grequests/blob/master/request.go#L312). This leads to behavior that is inconsistent with python's requests.

More specifically, lots of web servers (e.g., go's net/http, python's flask) rely on the existence of 'filename' when parsing a multipart form. If 'filename' exists, the field will be
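The distinction servers rely on can be seen in the multipart body that python's requests builds: `data` fields carry no filename attribute, while `files` entries do. A sketch; the URL and field names are placeholders of mine:

```python
import requests

# Mixing `data` and `files` in one multipart request. Preparing it lets us
# inspect the encoded body without any network traffic.
req = requests.Request(
    "POST", "http://example.com/upload",
    data={"plain_field": "value"},
    files={"upload": ("report.csv", b"a,b\n1,2\n", "text/csv")},
).prepare()

assert b'name="plain_field"' in req.body      # form field, no filename
assert b'filename="report.csv"' in req.body   # file part, filename present
assert req.body.count(b"filename=") == 1      # only the file part names a file
```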

ihipop
ihipop commented Apr 9, 2019

PSR's RequestInterface does not require implementations to provide an exec method, so when I design an HTTP-client-agnostic request object, I certainly cannot bind a client-specific exec method to the Request, because every client's exception types and handling logic differ.

I designed a composer component whose request-assembly part returns a PSR object. The intent was that any PSR-compliant HTTP client, such as guzzle or saber, could send the request this object represents. Guzzle can do this ($guzzleClient->send($PSRrequest)), but saber binds its PSR-related handling logic to its own custom Request, which makes such a design unworkable.
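For comparison, python's requests offers the same separation the reporter wants: a client-agnostic request object that any client can send. A sketch (the URL and header are placeholders of mine):

```python
import requests

# requests.Request is a plain description of a request; nothing
# client-specific is bound to it.
req = requests.Request("GET", "http://example.com/resource",
                       headers={"Accept": "application/json"})
prepared = req.prepare()

assert prepared.method == "GET"
assert prepared.headers["Accept"] == "application/json"
# Sending is the *client's* job: requests.Session().send(prepared)
```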

🏀 Python3 web crawlers in practice (some with detailed tutorials): Maoyan, Tencent Video, Douban, Yanzhao Wang, Weibo, Biquge novels, Baidu trending topics, Bilibili, CSDN, NetEase Cloud Reading, Ali Literature, Baidu Stocks, Toutiao, WeChat Official Accounts, NetEase Cloud Music, Lagou, Youdao, unsplash, Shixiseng, Autohome, League of Legends Box, Dianping, Lianjia, LPL schedules, typhoon tracking, Fantasy Westward Journey / Onmyoji trading houses, weather, Nowcoder, Baidu Wenku, bedtime stories, Zhihu, Wish

  • Updated Jun 6, 2020
  • Python
prkumar
prkumar commented Oct 12, 2019

Suggested by @Shanoir on Gitter:

One thing that I can suggest, though: overall, the documentation of uplink is very nice. It clearly lays out how the API classes can be set up and how to use the decorators. However, there are almost no examples available for the "use" of the methods. I think that, across the board, providing these usage examples could really help newcomers get onboarded on the l
