Sorry in advance for the beginner question. I'm learning how to access web data in Python, and I'm having trouble understanding exception handling in the requests package.
So far, when accessing web data using the urllib package, I wrap the urlopen call in a try/except structure to catch bad URLs, like this:
    import urllib, sys

    url = 'https://httpbintypo.org/'  # note typo in URL
    try:
        uh = urllib.urlopen(url)
    except:
        print 'Failed to open url.'
        sys.exit()
    text = uh.read()
    print text
This is kind of a crude way to do it, though, since a bare except can mask all kinds of problems other than bad URLs. (I know I could at least catch a specific exception instead, as sketched below.)
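If I'm reading the urllib2 docs right, something like this would narrow the catch to actual fetch failures (a sketch in the same Python 2 style as above; urllib2.URLError is the documented exception for failed opens):

    import sys
    import urllib2

    url = 'https://httpbintypo.org/'  # note typo in URL
    try:
        uh = urllib2.urlopen(url)
    except urllib2.URLError as e:
        # URLError covers DNS failures, refused connections, etc.,
        # without swallowing unrelated bugs the way a bare except does
        print 'Failed to open url:', e
        sys.exit()
    print uh.read()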
From the documentation, I had sort of gathered that I could avoid the try/except structure altogether when using the requests package, like this:
    import requests, sys

    url = 'https://httpbintypo.org/'  # note typo in URL
    r = requests.get(url)
    if r.raise_for_status() is not None:
        print 'Failed to open url.'
        sys.exit()
    text = r.text
    print text
However, this doesn't work (it throws an error and a traceback). What's the "right" (i.e., simple, elegant, pythonic) way to do this?
Try catching the ConnectionError. Your second snippet fails because a bad hostname makes requests.get itself raise a ConnectionError before you ever get a Response back, so the raise_for_status() check never runs:
    from requests.exceptions import ConnectionError

    try:
        requests.get('https://httpbintypo.org/')
    except ConnectionError:
        print 'Failed to open url.'
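If you also want to treat HTTP error statuses (404s, 500s, and so on) as failures, call raise_for_status() inside the try block; it raises an HTTPError on a bad status and returns None otherwise. A minimal sketch catching RequestException, the documented base class for ConnectionError, HTTPError, Timeout, etc.:

    import sys
    import requests
    from requests.exceptions import RequestException

    url = 'https://httpbintypo.org/'  # note typo in URL
    try:
        r = requests.get(url)
        r.raise_for_status()  # raises HTTPError for 4xx/5xx responses
    except RequestException as e:
        # RequestException is the common base class, so this catches
        # connection failures, timeouts, and bad statuses alike
        print 'Failed to open url:', e
        sys.exit()
    print r.text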