repo (stringclasses, 32 values) | instance_id (stringlengths 13-37) | base_commit (stringlengths 40) | patch (stringlengths 1-1.89M) | test_patch (stringclasses, 1 value) | problem_statement (stringlengths 304-69k) | hints_text (stringlengths 0-246k) | created_at (stringlengths 20) | version (stringclasses, 1 value) | FAIL_TO_PASS (stringclasses, 1 value) | PASS_TO_PASS (stringclasses, 1 value) | environment_setup_commit (stringclasses, 1 value) | traceback (stringlengths 64-23.4k) | __index_level_0__ (int64, 29-19k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ytdl-org/youtube-dl | ytdl-org__youtube-dl-30577 | af9e72507ea38e5ab3fa2751ed09ec88021260cb | diff --git a/youtube_dl/extractor/youtube.py b/youtube_dl/extractor/youtube.py
--- a/youtube_dl/extractor/youtube.py
+++ b/youtube_dl/extractor/youtube.py
@@ -416,6 +416,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
(?:.*?\#/)? # handle anchor (#/) redirect urls
(?: # the various things that can precede the ID:
(?:(?:v|embed|e)/(?!videoseries)) # v/ or embed/ or e/
+ |shorts/
|(?: # or the v= param in all its forms
(?:(?:watch|movie)(?:_popup)?(?:\.php)?/?)? # preceding watch(_popup|.php) or nothing (like /?v=xxxx)
(?:\?|\#!?) # the params delimiter ? or # or #!
@@ -1118,6 +1119,22 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
'skip_download': True,
},
},
+ {
+ # YT 'Shorts'
+ 'url': 'https://youtube.com/shorts/4L2J27mJ3Dc',
+ 'info_dict': {
+ 'id': '4L2J27mJ3Dc',
+ 'ext': 'mp4',
+ 'upload_date': '20211025',
+ 'uploader': 'Charlie Berens',
+ 'description': 'md5:976512b8a29269b93bbd8a61edc45a6d',
+ 'uploader_id': 'fivedlrmilkshake',
+ 'title': 'Midwest Squid Game #Shorts',
+ },
+ 'params': {
+ 'skip_download': True,
+ },
+ },
]
_formats = {
'5': {'ext': 'flv', 'width': 400, 'height': 240, 'acodec': 'mp3', 'abr': 64, 'vcodec': 'h263'},
| Problem with a YT short site
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2021.12.17
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
$ ./youtube-dl.exe --verbose https://youtube.com/shorts/aCYFDPdNM50
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', 'https://youtube.com/shorts/aCYFDPdNM50']
[debug] Encodings: locale cp1252, fs mbcs, out cp1252, pref cp1252
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1
[debug] exe versions: none
[debug] Proxy map: {}
[youtube:tab] shorts: Downloading webpage
ERROR: Unable to recognize tab page; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\YoutubeDL.py", line 836, in __extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\extractor\common.py", line 534, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\extractor\youtube.py", line 2862, in _real_extract
youtube_dl.utils.ExtractorError: Unable to recognize tab page; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
A YouTube Shorts video URL is not recognized.
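As an illustration of the fix in the patch above, here is a heavily simplified stand-in for the `_VALID_URL` pattern with the `shorts/` alternative added. This is not the real expression from `youtube.py`, just a minimal runnable sketch:
```
import re

# Heavily simplified stand-in for YoutubeIE._VALID_URL with the
# 'shorts/' alternative added, as in the patch above.
SIMPLIFIED_VALID_URL = re.compile(
    r'https?://(?:www\.)?youtube\.com/'
    r'(?:(?:v|embed|e)/|shorts/|watch\?v=)'
    r'(?P<id>[0-9A-Za-z_-]{11})')

m = SIMPLIFIED_VALID_URL.match('https://youtube.com/shorts/aCYFDPdNM50')
print(m.group('id') if m else 'no match')  # -> aCYFDPdNM50
```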
| 2022-01-31T00:10:16Z | [] | [] |
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\YoutubeDL.py", line 836, in __extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\extractor\common.py", line 534, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\extractor\youtube.py", line 2862, in _real_extract
youtube_dl.utils.ExtractorError: Unable to recognize tab page; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,868 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-30596 | 0c0876f790c78c38ececbc920073e8b6cf01e9c7 | diff --git a/youtube_dl/extractor/viki.py b/youtube_dl/extractor/viki.py
--- a/youtube_dl/extractor/viki.py
+++ b/youtube_dl/extractor/viki.py
@@ -1,38 +1,29 @@
# coding: utf-8
from __future__ import unicode_literals
-import base64
import hashlib
import hmac
-import itertools
import json
-import re
import time
from .common import InfoExtractor
-from ..compat import (
- compat_parse_qs,
- compat_urllib_parse_urlparse,
-)
from ..utils import (
ExtractorError,
int_or_none,
parse_age_limit,
parse_iso8601,
- sanitized_Request,
- std_headers,
try_get,
)
class VikiBaseIE(InfoExtractor):
_VALID_URL_BASE = r'https?://(?:www\.)?viki\.(?:com|net|mx|jp|fr)/'
- _API_QUERY_TEMPLATE = '/v4/%sapp=%s&t=%s&site=www.viki.com'
- _API_URL_TEMPLATE = 'https://api.viki.io%s&sig=%s'
+ _API_URL_TEMPLATE = 'https://api.viki.io%s'
+ _DEVICE_ID = '112395910d'
_APP = '100005a'
- _APP_VERSION = '6.0.0'
- _APP_SECRET = 'MM_d*yP@`&1@]@!AVrXf_o-HVEnoTnm$O-ti4[G~$JDI/Dc-&piU&z&5.;:}95=Iad'
+ _APP_VERSION = '6.11.3'
+ _APP_SECRET = 'd96704b180208dbb2efa30fe44c48bd8690441af9f567ba8fd710a72badc85198f7472'
_GEO_BYPASS = False
_NETRC_MACHINE = 'viki'
@@ -45,43 +36,60 @@ class VikiBaseIE(InfoExtractor):
'paywall': 'Sorry, this content is only available to Viki Pass Plus subscribers',
}
- def _prepare_call(self, path, timestamp=None, post_data=None):
+ def _stream_headers(self, timestamp, sig):
+ return {
+ 'X-Viki-manufacturer': 'vivo',
+ 'X-Viki-device-model': 'vivo 1606',
+ 'X-Viki-device-os-ver': '6.0.1',
+ 'X-Viki-connection-type': 'WIFI',
+ 'X-Viki-carrier': '',
+ 'X-Viki-as-id': '100005a-1625321982-3932',
+ 'timestamp': str(timestamp),
+ 'signature': str(sig),
+ 'x-viki-app-ver': self._APP_VERSION
+ }
+
+ def _api_query(self, path, version=4, **kwargs):
path += '?' if '?' not in path else '&'
- if not timestamp:
- timestamp = int(time.time())
- query = self._API_QUERY_TEMPLATE % (path, self._APP, timestamp)
+ app = self._APP
+ query = '/v{version}/{path}app={app}'.format(**locals())
if self._token:
query += '&token=%s' % self._token
+ return query + ''.join('&{name}={val}'.format(**locals()) for name, val in kwargs.items())
+
+ def _sign_query(self, path):
+ timestamp = int(time.time())
+ query = self._api_query(path, version=5)
sig = hmac.new(
self._APP_SECRET.encode('ascii'),
- query.encode('ascii'),
- hashlib.sha1
- ).hexdigest()
- url = self._API_URL_TEMPLATE % (query, sig)
- return sanitized_Request(
- url, json.dumps(post_data).encode('utf-8')) if post_data else url
-
- def _call_api(self, path, video_id, note, timestamp=None, post_data=None):
+ '{query}&t={timestamp}'.format(**locals()).encode('ascii'),
+ hashlib.sha1).hexdigest()
+ return timestamp, sig, self._API_URL_TEMPLATE % query
+
+ def _call_api(
+ self, path, video_id, note='Downloading JSON metadata', data=None, query=None, fatal=True):
+ if query is None:
+ timestamp, sig, url = self._sign_query(path)
+ else:
+ url = self._API_URL_TEMPLATE % self._api_query(path, version=4)
resp = self._download_json(
- self._prepare_call(path, timestamp, post_data), video_id, note,
- headers={'x-viki-app-ver': self._APP_VERSION})
-
- error = resp.get('error')
- if error:
- if error == 'invalid timestamp':
- resp = self._download_json(
- self._prepare_call(path, int(resp['current_timestamp']), post_data),
- video_id, '%s (retry)' % note)
- error = resp.get('error')
- if error:
- self._raise_error(resp['error'])
+ url, video_id, note, fatal=fatal, query=query,
+ data=json.dumps(data).encode('utf-8') if data else None,
+ headers=({'x-viki-app-ver': self._APP_VERSION} if data
+ else self._stream_headers(timestamp, sig) if query is None
+ else None), expected_status=400) or {}
+ self._raise_error(resp.get('error'), fatal)
return resp
- def _raise_error(self, error):
- raise ExtractorError(
- '%s returned error: %s' % (self.IE_NAME, error),
- expected=True)
+ def _raise_error(self, error, fatal=True):
+ if error is None:
+ return
+ msg = '%s said: %s' % (self.IE_NAME, error)
+ if fatal:
+ raise ExtractorError(msg, expected=True)
+ else:
+ self.report_warning(msg)
def _check_errors(self, data):
for reason, status in (data.get('blocking') or {}).items():
@@ -90,9 +98,10 @@ def _check_errors(self, data):
if reason == 'geo':
self.raise_geo_restricted(msg=message)
elif reason == 'paywall':
+ if try_get(data, lambda x: x['paywallable']['tvod']):
+ self._raise_error('This video is for rent only or TVOD (Transactional Video On demand)')
self.raise_login_required(message)
- raise ExtractorError('%s said: %s' % (
- self.IE_NAME, message), expected=True)
+ self._raise_error(message)
def _real_initialize(self):
self._login()
@@ -102,35 +111,39 @@ def _login(self):
if username is None:
return
- login_form = {
- 'login_id': username,
- 'password': password,
- }
-
- login = self._call_api(
- 'sessions.json', None,
- 'Logging in', post_data=login_form)
-
- self._token = login.get('token')
+ self._token = self._call_api(
+ 'sessions.json', None, 'Logging in', fatal=False,
+ data={'username': username, 'password': password}).get('token')
if not self._token:
- self.report_warning('Unable to get session token, login has probably failed')
+ self.report_warning('Login Failed: Unable to get session token')
@staticmethod
- def dict_selection(dict_obj, preferred_key, allow_fallback=True):
+ def dict_selection(dict_obj, preferred_key):
if preferred_key in dict_obj:
- return dict_obj.get(preferred_key)
-
- if not allow_fallback:
- return
-
- filtered_dict = list(filter(None, [dict_obj.get(k) for k in dict_obj.keys()]))
- return filtered_dict[0] if filtered_dict else None
+ return dict_obj[preferred_key]
+ return (list(filter(None, dict_obj.values())) or [None])[0]
class VikiIE(VikiBaseIE):
IE_NAME = 'viki'
_VALID_URL = r'%s(?:videos|player)/(?P<id>[0-9]+v)' % VikiBaseIE._VALID_URL_BASE
_TESTS = [{
+ 'note': 'Free non-DRM video with storyboards in MPD',
+ 'url': 'https://www.viki.com/videos/1175236v-choosing-spouse-by-lottery-episode-1',
+ 'info_dict': {
+ 'id': '1175236v',
+ 'ext': 'mp4',
+ 'title': 'Choosing Spouse by Lottery - Episode 1',
+ 'timestamp': 1606463239,
+ 'age_limit': 12,
+ 'uploader': 'FCC',
+ 'upload_date': '20201127',
+ },
+ 'expected_warnings': ['Unknown MIME type image/jpeg in DASH manifest'],
+ 'params': {
+ 'format': 'bestvideo',
+ },
+ }, {
'url': 'http://www.viki.com/videos/1023585v-heirs-episode-14',
'info_dict': {
'id': '1023585v',
@@ -146,7 +159,7 @@ class VikiIE(VikiBaseIE):
'params': {
'format': 'bestvideo',
},
- 'skip': 'Blocked in the US',
+ 'skip': 'Content is only available to Viki Pass Plus subscribers',
'expected_warnings': ['Unknown MIME type image/jpeg in DASH manifest'],
}, {
# clip
@@ -178,11 +191,11 @@ class VikiIE(VikiBaseIE):
'like_count': int,
'age_limit': 13,
},
- 'skip': 'Blocked in the US',
+ 'skip': 'Page not found!',
}, {
# episode
'url': 'http://www.viki.com/videos/44699v-boys-over-flowers-episode-1',
- 'md5': '0a53dc252e6e690feccd756861495a8c',
+ 'md5': '670440c79f7109ca6564d4c7f24e3e81',
'info_dict': {
'id': '44699v',
'ext': 'mp4',
@@ -193,7 +206,7 @@ class VikiIE(VikiBaseIE):
'upload_date': '20100405',
'uploader': 'group8',
'like_count': int,
- 'age_limit': 13,
+ 'age_limit': 15,
'episode_number': 1,
},
'params': {
@@ -224,7 +237,7 @@ class VikiIE(VikiBaseIE):
}, {
# non-English description
'url': 'http://www.viki.com/videos/158036v-love-in-magic',
- 'md5': '41faaba0de90483fb4848952af7c7d0d',
+ 'md5': '78bf49fdaa51f9e7f9150262a9ef9bdf',
'info_dict': {
'id': '158036v',
'ext': 'mp4',
@@ -232,8 +245,8 @@ class VikiIE(VikiBaseIE):
'upload_date': '20111122',
'timestamp': 1321985454,
'description': 'md5:44b1e46619df3a072294645c770cef36',
- 'title': 'Love In Magic',
- 'age_limit': 13,
+ 'title': 'Love in Magic',
+ 'age_limit': 15,
},
'params': {
'format': 'bestvideo',
@@ -244,45 +257,53 @@ class VikiIE(VikiBaseIE):
def _real_extract(self, url):
video_id = self._match_id(url)
- resp = self._download_json(
- 'https://www.viki.com/api/videos/' + video_id,
- video_id, 'Downloading video JSON', headers={
- 'x-client-user-agent': std_headers['User-Agent'],
- 'x-viki-app-ver': '3.0.0',
- })
- video = resp['video']
+ video = self._call_api('videos/{0}.json'.format(video_id), video_id, 'Downloading video JSON', query={})
self._check_errors(video)
- title = self.dict_selection(video.get('titles', {}), 'en', allow_fallback=False)
+ title = try_get(video, lambda x: x['titles']['en'], str)
episode_number = int_or_none(video.get('number'))
if not title:
title = 'Episode %d' % episode_number if video.get('type') == 'episode' else video.get('id') or video_id
container_titles = try_get(video, lambda x: x['container']['titles'], dict) or {}
container_title = self.dict_selection(container_titles, 'en')
- title = '%s - %s' % (container_title, title)
+ if container_title and title == video_id:
+ title = container_title
+ else:
+ title = '%s - %s' % (container_title, title)
+
+ resp = self._call_api(
+ 'playback_streams/%s.json?drms=dt3&device_id=%s' % (video_id, self._DEVICE_ID),
+ video_id, 'Downloading video streams JSON')['main'][0]
+
+ mpd_url = resp['url']
+ # 720p is hidden in another MPD which can be found in the current manifest content
+ mpd_content = self._download_webpage(mpd_url, video_id, note='Downloading initial MPD manifest')
+ mpd_url = self._search_regex(
+ r'(?mi)<BaseURL>(http.+.mpd)', mpd_content, 'new manifest', default=mpd_url)
+ if 'mpdhd_high' not in mpd_url:
+ # Modify the URL to get 1080p
+ mpd_url = mpd_url.replace('mpdhd', 'mpdhd_high')
+ formats = self._extract_mpd_formats(mpd_url, video_id)
+ self._sort_formats(formats)
description = self.dict_selection(video.get('descriptions', {}), 'en')
-
+ thumbnails = [{
+ 'id': thumbnail_id,
+ 'url': thumbnail['url'],
+ } for thumbnail_id, thumbnail in (video.get('images') or {}).items() if thumbnail.get('url')]
like_count = int_or_none(try_get(video, lambda x: x['likes']['count']))
- thumbnails = []
- for thumbnail_id, thumbnail in (video.get('images') or {}).items():
- thumbnails.append({
- 'id': thumbnail_id,
- 'url': thumbnail.get('url'),
- })
-
- subtitles = {}
- for subtitle_lang, _ in (video.get('subtitle_completions') or {}).items():
- subtitles[subtitle_lang] = [{
- 'ext': subtitles_format,
- 'url': self._prepare_call(
- 'videos/%s/subtitles/%s.%s' % (video_id, subtitle_lang, subtitles_format)),
- } for subtitles_format in ('srt', 'vtt')]
-
- result = {
+ stream_id = try_get(resp, lambda x: x['properties']['track']['stream_id'])
+ subtitles = dict((lang, [{
+ 'ext': ext,
+ 'url': self._API_URL_TEMPLATE % self._api_query(
+ 'videos/{0}/auth_subtitles/{1}.{2}'.format(video_id, lang, ext), stream_id=stream_id)
+ } for ext in ('srt', 'vtt')]) for lang in (video.get('subtitle_completions') or {}).keys())
+
+ return {
'id': video_id,
+ 'formats': formats,
'title': title,
'description': description,
'duration': int_or_none(video.get('duration')),
@@ -296,79 +317,6 @@ def _real_extract(self, url):
'episode_number': episode_number,
}
- formats = []
-
- def add_format(format_id, format_dict, protocol='http'):
- # rtmps URLs does not seem to work
- if protocol == 'rtmps':
- return
- format_url = format_dict.get('url')
- if not format_url:
- return
- qs = compat_parse_qs(compat_urllib_parse_urlparse(format_url).query)
- stream = qs.get('stream', [None])[0]
- if stream:
- format_url = base64.b64decode(stream).decode()
- if format_id in ('m3u8', 'hls'):
- m3u8_formats = self._extract_m3u8_formats(
- format_url, video_id, 'mp4',
- entry_protocol='m3u8_native',
- m3u8_id='m3u8-%s' % protocol, fatal=False)
- # Despite CODECS metadata in m3u8 all video-only formats
- # are actually video+audio
- for f in m3u8_formats:
- if '_drm/index_' in f['url']:
- continue
- if f.get('acodec') == 'none' and f.get('vcodec') != 'none':
- f['acodec'] = None
- formats.append(f)
- elif format_id in ('mpd', 'dash'):
- formats.extend(self._extract_mpd_formats(
- format_url, video_id, 'mpd-%s' % protocol, fatal=False))
- elif format_url.startswith('rtmp'):
- mobj = re.search(
- r'^(?P<url>rtmp://[^/]+/(?P<app>.+?))/(?P<playpath>mp4:.+)$',
- format_url)
- if not mobj:
- return
- formats.append({
- 'format_id': 'rtmp-%s' % format_id,
- 'ext': 'flv',
- 'url': mobj.group('url'),
- 'play_path': mobj.group('playpath'),
- 'app': mobj.group('app'),
- 'page_url': url,
- })
- else:
- formats.append({
- 'url': format_url,
- 'format_id': '%s-%s' % (format_id, protocol),
- 'height': int_or_none(self._search_regex(
- r'^(\d+)[pP]$', format_id, 'height', default=None)),
- })
-
- for format_id, format_dict in (resp.get('streams') or {}).items():
- add_format(format_id, format_dict)
- if not formats:
- streams = self._call_api(
- 'videos/%s/streams.json' % video_id, video_id,
- 'Downloading video streams JSON')
-
- if 'external' in streams:
- result.update({
- '_type': 'url_transparent',
- 'url': streams['external']['url'],
- })
- return result
-
- for format_id, stream_dict in streams.items():
- for protocol, format_dict in stream_dict.items():
- add_format(format_id, format_dict, protocol)
- self._sort_formats(formats)
-
- result['formats'] = formats
- return result
-
class VikiChannelIE(VikiBaseIE):
IE_NAME = 'viki:channel'
@@ -378,9 +326,9 @@ class VikiChannelIE(VikiBaseIE):
'info_dict': {
'id': '50c',
'title': 'Boys Over Flowers',
- 'description': 'md5:804ce6e7837e1fd527ad2f25420f4d59',
+ 'description': 'md5:f08b679c200e1a273c695fe9986f21d7',
},
- 'playlist_mincount': 71,
+ 'playlist_mincount': 51,
}, {
'url': 'http://www.viki.com/tv/1354c-poor-nastya-complete',
'info_dict': {
@@ -401,33 +349,38 @@ class VikiChannelIE(VikiBaseIE):
'only_matching': True,
}]
- _PER_PAGE = 25
+ _video_types = ('episodes', 'movies', 'clips', 'trailers')
+
+ def _entries(self, channel_id):
+ params = {
+ 'app': self._APP, 'token': self._token, 'only_ids': 'true',
+ 'direction': 'asc', 'sort': 'number', 'per_page': 30
+ }
+ video_types = self._video_types
+ for video_type in video_types:
+ if video_type not in self._video_types:
+ self.report_warning('Unknown video_type: ' + video_type)
+ page_num = 0
+ while True:
+ page_num += 1
+ params['page'] = page_num
+ res = self._call_api(
+ 'containers/{channel_id}/{video_type}.json'.format(**locals()), channel_id, query=params, fatal=False,
+ note='Downloading %s JSON page %d' % (video_type.title(), page_num))
+
+ for video_id in res.get('response') or []:
+ yield self.url_result('https://www.viki.com/videos/' + video_id, VikiIE.ie_key(), video_id)
+ if not res.get('more'):
+ break
def _real_extract(self, url):
channel_id = self._match_id(url)
- channel = self._call_api(
- 'containers/%s.json' % channel_id, channel_id,
- 'Downloading channel JSON')
+ channel = self._call_api('containers/%s.json' % channel_id, channel_id, 'Downloading channel JSON')
self._check_errors(channel)
- title = self.dict_selection(channel['titles'], 'en')
-
- description = self.dict_selection(channel['descriptions'], 'en')
-
- entries = []
- for video_type in ('episodes', 'clips', 'movies'):
- for page_num in itertools.count(1):
- page = self._call_api(
- 'containers/%s/%s.json?per_page=%d&sort=number&direction=asc&with_paging=true&page=%d'
- % (channel_id, video_type, self._PER_PAGE, page_num), channel_id,
- 'Downloading %s JSON page #%d' % (video_type, page_num))
- for video in page['response']:
- video_id = video['id']
- entries.append(self.url_result(
- 'https://www.viki.com/videos/%s' % video_id, 'Viki'))
- if not page['pagination']['next']:
- break
-
- return self.playlist_result(entries, channel_id, title, description)
+ return self.playlist_result(
+ self._entries(channel_id), channel_id,
+ self.dict_selection(channel['titles'], 'en'),
+ self.dict_selection(channel['descriptions'], 'en'))
| Viki.com not working
Youtube-DL (and -DLP) used to work for Viki.com until a few days ago.
Verbose log from Youtube-DL:
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', '--cookies', 'viki.txt', '--username', 'PRIVATE', '--password', 'PRIVATE', 'https://www.viki.com/videos/1127339v']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041
[debug] exe versions: ffmpeg 2022-01-13-git-c936c319bd-essentials_build-www.gyan.dev
[debug] Proxy map: {}
[viki] Logging in
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\extractor\common.py", line 634, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpupik7c6w\build\youtube_dl\YoutubeDL.py", line 2288, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
Verbose log from Youtube-DLP:
[debug] Command-line config: ['--verbose', '--cookies', 'viki.txt', '--username', 'PRIVATE', '--password', 'PRIVATE', 'https://www.viki.com/videos/1127339v']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8, err utf-8, pref cp1252
[debug] yt-dlp version 2022.01.21 [f20d607] (win_exe)
[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.19042-SP0
[debug] exe versions: ffmpeg 2022-01-13-git-c936c319bd-essentials_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
[viki] Logging in
WARNING: [viki] Unable to download JSON metadata: HTTP Error 404: Not Found
WARNING: [viki] Login Failed: Unable to get session token
[debug] [viki] Extracting URL: https://www.viki.com/videos/1127339v
[viki] 1127339v: Downloading video JSON
[viki] 1127339v: Downloading video streams JSON
ERROR: [viki] 1127339v: viki said: Bad request
File "yt_dlp\extractor\common.py", line 612, in extract
File "yt_dlp\extractor\viki.py", line 255, in _real_extract
File "yt_dlp\extractor\viki.py", line 78, in _call_api
File "yt_dlp\extractor\viki.py", line 86, in _raise_error
Viki giving HTTP Error 422 for a few videos in between
```
.....
[download] Attention, Love! - Episode 3-1120166v.mp4 has already been downloaded and merged
[download] Downloading video 4 of 17
[viki] 1120167v: Downloading video JSON
ERROR: Unable to download JSON metadata: HTTP Error 422: Unprocessable Entity (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
I get the above error in the middle with the command `youtube-dl --sub-lang en --sub-format vtt --write-sub https://www.viki.com/tv/35551c-attention-love`.
Adding `-i` to the above (to ignore errors) downloads that video without issues.
Only that video (Episode 4) has had the issue so far, but since the download worked once errors were ignored, there may be something here that can be fixed.
Version (installed from a git release a few hours ago):
```
:~$ youtube-dl -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2021.06.06
[debug] Python version 2.7.18 (CPython) - Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-Kali-kali-rolling-kali-rolling
[debug] exe versions: ffmpeg 4.3.1-5, ffprobe 4.3.1-5, rtmpdump 2.4
[debug] Proxy map: {}
```
[Broken] Viki
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.06.06. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.06.06**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2021.06.06
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
[debug] Custom config: []
[debug] Command-line args: ['--verbose', '--cookies', 'cookies.txt', '--username
', 'PRIVATE', '--password', 'PRIVATE', '-F', 'https://www.viki.com/movies/37182c
-man-of-men']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2021.06.06
[debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg 4.3.1, ffprobe 4.3.1
[debug] Proxy map: {}
[viki:channel] Logging in
[viki:channel] 37182c: Downloading channel JSON
[viki:channel] 37182c: Downloading episodes JSON page #1
[viki:channel] 37182c: Downloading clips JSON page #1
[viki:channel] 37182c: Downloading movies JSON page #1
[download] Downloading playlist: Man of Men
[viki:channel] playlist Man of Men: Collected 1 video ids (downloading 1 of them
)
[download] Downloading video 1 of 1
[viki] Logging in
[viki] 1172967v: Downloading video JSON
ERROR: Sorry, this content is only available to Viki Pass Plus subscribers. Use
--username and --password or --netrc to provide account credentials.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\YoutubeDL.py", line 836, in __extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\extractor\common.py", line 534, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\extractor\viki.py", line 255, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\extractor\viki.py", line 93, in _check_errors
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\extractor\common.py", line 943, in raise_login_required
youtube_dl.utils.ExtractorError: Sorry, this content is only available to Viki P
ass Plus subscribers. Use --username and --password or --netrc to provide accoun
t credentials.
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Hello.
I have a subscription to this site. This movie plays successfully in the browser.
| This is already fixed in yt-dlp 2022.02.03 (yt-dlp/yt-dlp#2540). The code here is a bit different so the rest of the changes would probably have to be backported as well to fix it.
Same for me on macOS (actually I cannot download any video anymore). After investigating, the JSON content of the 422 error is the following:
```
{"error":{"status":422,"body":{"vcode":6422,"error":"Unprocessable entity","details":"invalid app ver"}}}
```
Looks like things have changed on Viki's server side and this part is impacted:
https://github.com/ytdl-org/youtube-dl/blob/c2350cac243ba1ec1586fe85b0d62d1b700047a2/youtube_dl/extractor/viki.py#L247-L252
Replacing the version with '4.0.74' (the latest observed on the website) fixes the 422 error, but the download then fails when retrieving the streams JSON with:
```
ERROR: Unable to download JSON metadata: HTTP Error 400: Bad Request (caused by <HTTPError 400: 'Bad Request'>)
Detail :
{"error":"invalid signature","vcode":8003}
```
I'm stuck on this and we probably need help from someone who knows the viki extractor better :)
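For reference, here is a minimal standalone sketch of the signing scheme the patch above switches to (HMAC-SHA1 over the bare query plus a timestamp, keyed with the app secret). The `APP`/`APP_SECRET` values are copied from the patch; the helper itself is illustrative, not youtube-dl's actual code:
```
import hashlib
import hmac
import time

APP = '100005a'
# secret as used in the patch above (illustrative only)
APP_SECRET = 'd96704b180208dbb2efa30fe44c48bd8690441af9f567ba8fd710a72badc85198f7472'

def sign_query(path, version=5):
    # build the bare API query, e.g. '/v5/videos/1127339v.json?app=100005a'
    path += '?' if '?' not in path else '&'
    query = '/v{0}/{1}app={2}'.format(version, path, APP)
    timestamp = int(time.time())
    # the signature covers '<query>&t=<timestamp>'
    sig = hmac.new(
        APP_SECRET.encode('ascii'),
        '{0}&t={1}'.format(query, timestamp).encode('ascii'),
        hashlib.sha1).hexdigest()
    return timestamp, sig, 'https://api.viki.io' + query

print(sign_query('videos/1127339v.json'))
```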
| 2022-02-04T12:40:24Z | [] | [] |
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl
31\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
| 18,870 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-30676 | 34722270741fb9c06f978861c1e5f503291070d8 | diff --git a/youtube_dl/extractor/myspass.py b/youtube_dl/extractor/myspass.py
--- a/youtube_dl/extractor/myspass.py
+++ b/youtube_dl/extractor/myspass.py
@@ -35,7 +35,9 @@ def _real_extract(self, url):
title = xpath_text(metadata, 'title', fatal=True)
video_url = xpath_text(metadata, 'url_flv', 'download url', True)
video_id_int = int(video_id)
- for group in re.search(r'/myspass2009/\d+/(\d+)/(\d+)/(\d+)/', video_url).groups():
+
+ grps = re.search(r'/myspass2009/\d+/(\d+)/(\d+)/(\d+)/', video_url)
+ for group in grps.groups() if grps else []:
group_int = int(group)
if group_int > video_id_int:
video_url = video_url.replace(
| myspass.de broken
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ youtube-dl -v https://www.myspass.de/shows/tvshows/tv-total/Novak-Puffovic-bei-bester-Laune--/44996/
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.myspass.de/shows/tvshows/tv-total/Novak-Puffovic-bei-bester-Laune--/44996/']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.8.10 (CPython) - Linux-5.4.0-73-generic-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[MySpass] 44996: Downloading XML
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/bin/youtube-dl/__main__.py", line 19, in <module>
File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 475, in main
File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 465, in _real_main
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2068, in download
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 808, in extract_info
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 815, in wrapper
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 836, in __extract_info
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 534, in extract
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/myspass.py", line 38, in _real_extract
AttributeError: 'NoneType' object has no attribute 'groups'
```
## Description
Download of myspass.de doesn't work anymore.
Fixed groups() call on potentially empty regex search object.
- https://github.com/ytdl-org/youtube-dl/issues/30521
## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Read [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site)
- [x] Read [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) and adjusted the code to meet them
- [x] Covered the code with tests (note that PRs without tests will be REJECTED)
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Within the myspass site extractor, there is a short regex that checks for certain substrings in the video URL.
```groups()``` was called on the result of ```re.search()``` without first ensuring that a match object was returned. I've added a simple check for this.
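A tiny runnable illustration of the guarded pattern described above (the URL below is made up for the example; real URLs come from the metadata XML):
```
import re

# made-up URL for illustration only
video_url = 'http://videos.myspass.de/myspass2009/11/01/02/03/clip.flv'

# guard against re.search() returning None before calling .groups()
grps = re.search(r'/myspass2009/\d+/(\d+)/(\d+)/(\d+)/', video_url)
for group in grps.groups() if grps else []:
    print(int(group))
```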
| 2022-02-24T13:24:09Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/bin/youtube-dl/__main__.py", line 19, in <module>
File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 475, in main
File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 465, in _real_main
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2068, in download
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 808, in extract_info
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 815, in wrapper
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 836, in __extract_info
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 534, in extract
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/myspass.py", line 38, in _real_extract
AttributeError: 'NoneType' object has no attribute 'groups'
| 18,876 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-3089 | 2371053565787dc833b04a6d8a45730d61ae7074 | diff --git a/youtube_dl/extractor/ard.py b/youtube_dl/extractor/ard.py
--- a/youtube_dl/extractor/ard.py
+++ b/youtube_dl/extractor/ard.py
@@ -56,7 +56,18 @@ def _real_extract(self, url):
raise ExtractorError('This video is only available after 20:00')
formats = []
+
for s in streams:
+ if type(s['_stream']) == list:
+ for index, url in enumerate(s['_stream'][::-1]):
+ quality = s['_quality'] + index
+ formats.append({
+ 'quality': quality,
+ 'url': url,
+ 'format_id': '%s-%s' % (determine_ext(url), quality)
+ })
+ continue
+
format = {
'quality': s['_quality'],
'url': s['_stream'],
| AttributeError in ard module
With the command given below, I get the error message below. I'm using version 2014.06.09.
`youtube-dl http://www.ardmediathek.de/tv/Klassiker-der-Weltliteratur/Max-Frisch/BR-alpha/Video\?documentId\=19067308\&bcastId\=14913194`
```
[ARD] 19067308: Downloading webpage
[ARD] 19067308: Downloading JSON metadata
Traceback (most recent call last):
File "/usr/bin/youtube-dl", line 9, in <module>
load_entry_point('youtube-dl==2014.06.09', 'console_scripts', 'youtube-dl')()
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 853, in main
_real_main(argv)
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 843, in _real_main
retcode = ydl.download(all_urls)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 1050, in download
self.extract_info(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 516, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 168, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/ard.py", line 66, in _real_extract
determine_ext(format['url']), format['quality'])
File "/usr/lib/python3.4/site-packages/youtube_dl/utils.py", line 845, in determine_ext
guess = url.partition(u'?')[0].rpartition(u'.')[2]
AttributeError: 'list' object has no attribute 'partition'
```
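The patch above handles exactly the case this traceback shows: `s['_stream']` can be a list of URLs rather than a single URL string. A minimal standalone sketch of that handling, using made-up stream data (the URLs and quality values are placeholders, not real ARD data):
```
# '_stream' may be a single URL or a list of URLs (presumably best first)
streams = [
    {'_quality': 1, '_stream': ['http://example.invalid/hi.mp4',
                                'http://example.invalid/lo.mp4']},
    {'_quality': 2, '_stream': 'http://example.invalid/single.mp4'},
]

formats = []
for s in streams:
    if isinstance(s['_stream'], list):
        # walk the list in reverse so higher indices mean higher quality
        for index, url in enumerate(s['_stream'][::-1]):
            formats.append({'quality': s['_quality'] + index, 'url': url})
        continue
    formats.append({'quality': s['_quality'], 'url': s['_stream']})

for f in formats:
    print(f)
```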
| 2014-06-16T14:19:24Z | [] | [] |
Traceback (most recent call last):
File "/usr/bin/youtube-dl", line 9, in <module>
load_entry_point('youtube-dl==2014.06.09', 'console_scripts', 'youtube-dl')()
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 853, in main
_real_main(argv)
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 843, in _real_main
retcode = ydl.download(all_urls)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 1050, in download
self.extract_info(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 516, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 168, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/ard.py", line 66, in _real_extract
determine_ext(format['url']), format['quality'])
File "/usr/lib/python3.4/site-packages/youtube_dl/utils.py", line 845, in determine_ext
guess = url.partition(u'?')[0].rpartition(u'.')[2]
AttributeError: 'list' object has no attribute 'partition'
| 18,879 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-31181 | b0a60ce2032172aeaaf27fe3866ab72768f10cb2 | diff --git a/youtube_dl/extractor/infoq.py b/youtube_dl/extractor/infoq.py
--- a/youtube_dl/extractor/infoq.py
+++ b/youtube_dl/extractor/infoq.py
@@ -1,6 +1,9 @@
# coding: utf-8
from __future__ import unicode_literals
+from ..utils import (
+ ExtractorError,
+)
from ..compat import (
compat_b64decode,
@@ -90,7 +93,11 @@ def _extract_http_video(self, webpage):
}]
def _extract_http_audio(self, webpage, video_id):
- fields = self._form_hidden_inputs('mp3Form', webpage)
+ try:
+ fields = self._form_hidden_inputs('mp3Form', webpage)
+ except ExtractorError:
+ fields = {}
+
http_audio_url = fields.get('filename')
if not http_audio_url:
return []
| Unable to extract mp3Form form for InfoQ video
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ youtube-dl --verbose 'https://www.infoq.com/presentations/problems-async-arch/'
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', 'https://www.infoq.com/presentations/problems-async-arch/']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.10.5 (CPython) - macOS-12.5-x86_64-i386-64bit
[debug] exe versions: ffmpeg 5.1, ffprobe 5.1
[debug] Proxy map: {}
[InfoQ] problems-async-arch: Downloading webpage
ERROR: Unable to extract mp3Form form; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 815, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 836, in __extract_info
ie_result = ie.extract(url)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 534, in extract
ie_result = self._real_extract(url)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/infoq.py", line 128, in _real_extract
+ self._extract_http_audio(webpage, video_id))
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/infoq.py", line 93, in _extract_http_audio
fields = self._form_hidden_inputs('mp3Form', webpage)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 1367, in _form_hidden_inputs
form = self._search_regex(
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 1012, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
youtube_dl.utils.RegexNotFoundError: Unable to extract mp3Form form; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
## Description
I can watch the video in a browser whether I'm logged in or not. When downloading the video with youtube-dl, I get the error, "Unable to extract mp3Form form".
| The page doesn't have the `<form...>` with `id="mp3Form"` that the extractor expects. This shouldn't be a crashing error. The [`http_video` format](https://videoh.infoq.com/presentations/21-nov-unblockeddesign.mp4?Signature=a83fgFGAqJAFhPYYUvVs72T4J5jxajpL1LKMqSNDRCzqkUWJBelQwyZhIIjBNTTyXb27NkpI5kyJD3iw5LnfSTtDU9Mn8zj%7ES0-j7TJFeS7ojSNqYPb5v4vq0BVd534MuKEoS0KKMJuJhzG9BDMK305YsKmAs8-njJ--S9R5qJk_&Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6IioiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2NTk0ODM5Mjd9LCJJcEFkZHJlc3MiOnsiQVdTOlNvdXJjZUlwIjoiMC4wLjAuMC8wIn19fV19&Key-Pair-Id=APKAIMZVI7QH4C5YKH6Q) has audio.
I was able to work around this problem by modifying the `_real_extract()` method in `youtube_dl/extractor/infoq.py` and commenting out the `self._extract_http_audio(webpage, video_id)` call in line 128.
That would work in this case.
More generally, let's assume that the targeted form is sometimes present. We can either
* trap the failing code inside `_extract_http_audio()` with a `try:`, or
* extend the `__form_hidden_inputs()` method signature with `**kwargs` passed into its `_search_regex()` call so that `fatal=False` can be passed down.
The first option occurred to me to minimize the number of things touched (infoq.py vs infoq.py and common.py) and just deal with a smaller blast radius.
The second option would better if other extractors might need to deal with a similar situation. This would expose more of the facilities that `_search_regex()` provides.
Thanks for looking into this!
I tried extending the `_form_hidden_inputs()` method to pass `fatal=False` and it does download the video from InfoQ but with `WARNING: unable to extract mp3Form form`.
```
$ youtube-dl -v https://www.infoq.com/presentations/problems-async-arch/
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.infoq.com/presentations/problems-async-arch/']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.10.5 (CPython) - macOS-12.5-x86_64-i386-64bit
[debug] exe versions: ffmpeg 5.1, ffprobe 5.1
[debug] Proxy map: {}
[InfoQ] problems-async-arch: Downloading webpage
WARNING: unable to extract mp3Form form; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'https://videoh.infoq.com/presentations/21-nov-unblockeddesign.mp4?Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6IioiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2NTk0OTQ0MDZ9LCJJcEFkZHJlc3MiOnsiQVdTOlNvdXJjZUlwIjoiMC4wLjAuMC8wIn19fV19&Signature=KUA9-Ak19p9ZPMBvwk1KTESuNeZXi~nLl8HSlcyOEUmRP6CRa~5LXQsBxOqSRgyHeKDMd9OFidkSyysMkwbav3msMuV6nfR8P1KdbUKSfX-c980~KvPQn51X15IxLQpYrPjfoU-TMiGp232JwL3i5vizxcX-8MN3KLuIHmqY1RQ_&Key-Pair-Id=APKAIMZVI7QH4C5YKH6Q'
[download] Destination: Unblocked by Design-problems-async-arch.mp4
[download] 1.8% of 217.90MiB at 7.89MiB/s ETA 00:27^C
ERROR: Interrupted by user
```
This is what I did:
```
$ diff -u extractor/infoq.py-orig extractor/infoq.py
--- extractor/infoq.py-orig 2021-12-16 09:02:01.000000000 -1000
+++ extractor/infoq.py 2022-08-02 14:31:51.000000000 -1000
@@ -90,7 +90,7 @@
}]
def _extract_http_audio(self, webpage, video_id):
- fields = self._form_hidden_inputs('mp3Form', webpage)
+ fields = self._form_hidden_inputs('mp3Form', webpage, fatal=False)
http_audio_url = fields.get('filename')
if not http_audio_url:
return []
$ diff -u extractor/common.py-orig extractor/common.py
--- extractor/common.py-orig 2021-12-16 09:02:01.000000000 -1000
+++ extractor/common.py 2022-08-02 14:33:30.000000000 -1000
@@ -1363,10 +1363,12 @@
hidden_inputs[name] = value
return hidden_inputs
- def _form_hidden_inputs(self, form_id, html):
+ def _form_hidden_inputs(self, form_id, html, fatal=True):
form = self._search_regex(
r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
- html, '%s form' % form_id, group='form')
+ html, '%s form' % form_id, group='form', fatal=fatal)
+ if not form:
+ form = ''
return self._hidden_inputs(form)
def _sort_formats(self, formats, field_preference=None):
```
The warning comes from the `_search_regex()` method in `common.py`. When the fatal parameter is passed as False and no match is found, things end up in the `else` clause and the warning message is printed with the `bug_reports_message()` text. No exception is raised.
To have the warning message not printed when passing `fatal=False` to the `_form_hidden_inputs()` method, a check for no match object and not fatal is needed in the `else` clause of the `_search_regex()` method.
```
$ diff -u extractor/common.py-orig extractor/common.py
--- extractor/common.py-orig 2021-12-16 09:02:01.000000000 -1000
+++ extractor/common.py 2022-08-02 14:51:27.000000000 -1000
@@ -1011,6 +1011,8 @@
elif fatal:
raise RegexNotFoundError('Unable to extract %s' % _name)
else:
+ if not mobj and not fatal:
+ return None
self._downloader.report_warning('unable to extract %s' % _name + bug_reports_message())
return None
@@ -1363,10 +1365,12 @@
hidden_inputs[name] = value
return hidden_inputs
- def _form_hidden_inputs(self, form_id, html):
+ def _form_hidden_inputs(self, form_id, html, fatal=True):
form = self._search_regex(
r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
- html, '%s form' % form_id, group='form')
+ html, '%s form' % form_id, group='form', fatal=fatal)
+ if not form:
+ form = ''
return self._hidden_inputs(form)
def _sort_formats(self, formats, field_preference=None):
```
You can use `default=None` (no warning log message) or `fatal=False` (warning message) depending on whether the missing form is expected or its absence indicates that more work is needed on the extractor. There isn't, though there could be, an `expected` parameter along with `fatal` that would suppress the `bug_reports_message()` in the warning when `True`, if the user should be able to see why no separate audio was extracted.
So I would (and did, just like your diff above) use `fatal=False` initially. Then, depending on tests:
* if the form is never there in current pages, I'd just remove the `_extract_http_audio()` method;
* if the form is still served in old pages but not in new ones, I'd switch to `default=None`;
Having a look at yt-dlp's extractor, the case is already handled and ignored there. Equivalent yt-dl code would be:
```diff
def _extract_http_audio(self, webpage, video_id):
+ try:
- fields = self._form_hidden_inputs('mp3Form', webpage)
+ fields = self._form_hidden_inputs('mp3Form', webpage)
+ except ExtractorError:
+ fields = {}
http_audio_url = fields.get('filename')
...
```
This code assumes that any kind of `ExtractorError` should be ignored here, not just `RegexNotFoundError`. For now I'll probably just push this for compatibility.
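For illustration, a standalone, simplified re-implementation of that "optional form" pattern; the helpers below are hypothetical stand-ins for youtube-dl's `_form_hidden_inputs`/`RegexNotFoundError`, not the real API:
```
import re

class FormNotFoundError(Exception):
    """Stand-in for youtube-dl's RegexNotFoundError/ExtractorError."""

def form_hidden_inputs(form_id, html):
    m = re.search(
        r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
        html)
    if not m:
        raise FormNotFoundError('Unable to extract %s form' % form_id)
    # crude hidden-input scrape, enough for the illustration
    return dict(re.findall(r'name="([^"]+)"[^>]*value="([^"]*)"', m.group('form')))

def extract_http_audio_url(html):
    # treat a missing mp3Form as "no separate audio", not a hard failure
    try:
        fields = form_hidden_inputs('mp3Form', html)
    except FormNotFoundError:
        fields = {}
    return fields.get('filename')

print(extract_http_audio_url('<html><body>no mp3Form here</body></html>'))  # -> None
```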
Awesome! Thanks, again! | 2022-08-18T18:27:18Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 815, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 836, in __extract_info
ie_result = ie.extract(url)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 534, in extract
ie_result = self._real_extract(url)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/infoq.py", line 128, in _real_extract
+ self._extract_http_audio(webpage, video_id))
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/infoq.py", line 93, in _extract_http_audio
fields = self._form_hidden_inputs('mp3Form', webpage)
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 1367, in _form_hidden_inputs
form = self._search_regex(
File "/usr/local/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 1012, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
youtube_dl.utils.RegexNotFoundError: Unable to extract mp3Form form; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,885 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-31235 | 7009bb9f3182449ae8cc05cc28b768b63030a485 | diff --git a/youtube_dl/aes.py b/youtube_dl/aes.py
--- a/youtube_dl/aes.py
+++ b/youtube_dl/aes.py
@@ -8,6 +8,18 @@
BLOCK_SIZE_BYTES = 16
+def pkcs7_padding(data):
+ """
+ PKCS#7 padding
+
+ @param {int[]} data cleartext
+ @returns {int[]} padding data
+ """
+
+ remaining_length = BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES
+ return data + [remaining_length] * remaining_length
+
+
def aes_ctr_decrypt(data, key, counter):
"""
Decrypt with aes in counter mode
@@ -76,8 +88,7 @@ def aes_cbc_encrypt(data, key, iv):
previous_cipher_block = iv
for i in range(block_count):
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
- remaining_length = BLOCK_SIZE_BYTES - len(block)
- block += [remaining_length] * remaining_length
+ block = pkcs7_padding(block)
mixed_block = xor(block, previous_cipher_block)
encrypted_block = aes_encrypt(mixed_block, expanded_key)
@@ -88,6 +99,28 @@ def aes_cbc_encrypt(data, key, iv):
return encrypted_data
+def aes_ecb_encrypt(data, key):
+ """
+ Encrypt with aes in ECB mode. Using PKCS#7 padding
+
+ @param {int[]} data cleartext
+ @param {int[]} key 16/24/32-Byte cipher key
+ @returns {int[]} encrypted data
+ """
+ expanded_key = key_expansion(key)
+ block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+
+ encrypted_data = []
+ for i in range(block_count):
+ block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+ block = pkcs7_padding(block)
+
+ encrypted_block = aes_encrypt(block, expanded_key)
+ encrypted_data += encrypted_block
+
+ return encrypted_data
+
+
def key_expansion(data):
"""
Generate key schedule
diff --git a/youtube_dl/extractor/neteasemusic.py b/youtube_dl/extractor/neteasemusic.py
--- a/youtube_dl/extractor/neteasemusic.py
+++ b/youtube_dl/extractor/neteasemusic.py
@@ -1,20 +1,31 @@
# coding: utf-8
from __future__ import unicode_literals
-from hashlib import md5
from base64 import b64encode
+from binascii import hexlify
from datetime import datetime
+from hashlib import md5
+from random import randint
+import json
import re
+import time
from .common import InfoExtractor
+from ..aes import aes_ecb_encrypt, pkcs7_padding
from ..compat import (
compat_urllib_parse_urlencode,
compat_str,
compat_itertools_count,
)
from ..utils import (
- sanitized_Request,
+ ExtractorError,
+ bytes_to_intlist,
float_or_none,
+ int_or_none,
+ intlist_to_bytes,
+ sanitized_Request,
+ std_headers,
+ try_get,
)
@@ -35,32 +46,85 @@ def _encrypt(cls, dfsid):
result = b64encode(m.digest()).decode('ascii')
return result.replace('/', '_').replace('+', '-')
+ @classmethod
+ def make_player_api_request_data_and_headers(cls, song_id, bitrate):
+ KEY = b'e82ckenh8dichen8'
+ URL = '/api/song/enhance/player/url'
+ now = int(time.time() * 1000)
+ rand = randint(0, 1000)
+ cookie = {
+ 'osver': None,
+ 'deviceId': None,
+ 'appver': '8.0.0',
+ 'versioncode': '140',
+ 'mobilename': None,
+ 'buildver': '1623435496',
+ 'resolution': '1920x1080',
+ '__csrf': '',
+ 'os': 'pc',
+ 'channel': None,
+ 'requestId': '{0}_{1:04}'.format(now, rand),
+ }
+ request_text = json.dumps(
+ {'ids': '[{0}]'.format(song_id), 'br': bitrate, 'header': cookie},
+ separators=(',', ':'))
+ message = 'nobody{0}use{1}md5forencrypt'.format(
+ URL, request_text).encode('latin1')
+ msg_digest = md5(message).hexdigest()
+
+ data = '{0}-36cd479b6b5-{1}-36cd479b6b5-{2}'.format(
+ URL, request_text, msg_digest)
+ data = pkcs7_padding(bytes_to_intlist(data))
+ encrypted = intlist_to_bytes(aes_ecb_encrypt(data, bytes_to_intlist(KEY)))
+ encrypted_params = hexlify(encrypted).decode('ascii').upper()
+
+ cookie = '; '.join(
+ ['{0}={1}'.format(k, v if v is not None else 'undefined')
+ for [k, v] in cookie.items()])
+
+ headers = {
+ 'User-Agent': std_headers['User-Agent'],
+ 'Content-Type': 'application/x-www-form-urlencoded',
+ 'Referer': 'https://music.163.com',
+ 'Cookie': cookie,
+ }
+ return ('params={0}'.format(encrypted_params), headers)
+
+ def _call_player_api(self, song_id, bitrate):
+ url = 'https://interface3.music.163.com/eapi/song/enhance/player/url'
+ data, headers = self.make_player_api_request_data_and_headers(song_id, bitrate)
+ try:
+ return self._download_json(
+ url, song_id, data=data.encode('ascii'), headers=headers)
+ except ExtractorError as e:
+ if type(e.cause) in (ValueError, TypeError):
+ # JSON load failure
+ raise
+ except Exception:
+ pass
+ return {}
+
def extract_formats(self, info):
formats = []
+ song_id = info['id']
for song_format in self._FORMATS:
details = info.get(song_format)
if not details:
continue
- song_file_path = '/%s/%s.%s' % (
- self._encrypt(details['dfsId']), details['dfsId'], details['extension'])
-
- # 203.130.59.9, 124.40.233.182, 115.231.74.139, etc is a reverse proxy-like feature
- # from NetEase's CDN provider that can be used if m5.music.126.net does not
- # work, especially for users outside of Mainland China
- # via: https://github.com/JixunMoe/unblock-163/issues/3#issuecomment-163115880
- for host in ('http://m5.music.126.net', 'http://115.231.74.139/m1.music.126.net',
- 'http://124.40.233.182/m1.music.126.net', 'http://203.130.59.9/m1.music.126.net'):
- song_url = host + song_file_path
+
+ bitrate = int_or_none(details.get('bitrate')) or 999000
+ data = self._call_player_api(song_id, bitrate)
+ for song in try_get(data, lambda x: x['data'], list) or []:
+ song_url = try_get(song, lambda x: x['url'])
if self._is_valid_url(song_url, info['id'], 'song'):
formats.append({
'url': song_url,
'ext': details.get('extension'),
- 'abr': float_or_none(details.get('bitrate'), scale=1000),
+ 'abr': float_or_none(song.get('br'), scale=1000),
'format_id': song_format,
- 'filesize': details.get('size'),
- 'asr': details.get('sr')
+ 'filesize': int_or_none(song.get('size')),
+ 'asr': int_or_none(details.get('sr')),
})
- break
return formats
@classmethod
@@ -79,30 +143,16 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
_VALID_URL = r'https?://music\.163\.com/(#/)?song\?id=(?P<id>[0-9]+)'
_TESTS = [{
'url': 'http://music.163.com/#/song?id=32102397',
- 'md5': 'f2e97280e6345c74ba9d5677dd5dcb45',
+ 'md5': '3e909614ce09b1ccef4a3eb205441190',
'info_dict': {
'id': '32102397',
'ext': 'mp3',
- 'title': 'Bad Blood (feat. Kendrick Lamar)',
+ 'title': 'Bad Blood',
'creator': 'Taylor Swift / Kendrick Lamar',
- 'upload_date': '20150517',
- 'timestamp': 1431878400,
- 'description': 'md5:a10a54589c2860300d02e1de821eb2ef',
+ 'upload_date': '20150516',
+ 'timestamp': 1431792000,
+ 'description': 'md5:25fc5f27e47aad975aa6d36382c7833c',
},
- 'skip': 'Blocked outside Mainland China',
- }, {
- 'note': 'No lyrics translation.',
- 'url': 'http://music.163.com/#/song?id=29822014',
- 'info_dict': {
- 'id': '29822014',
- 'ext': 'mp3',
- 'title': '听见下雨的声音',
- 'creator': '周杰伦',
- 'upload_date': '20141225',
- 'timestamp': 1419523200,
- 'description': 'md5:a4d8d89f44656af206b7b2555c0bce6c',
- },
- 'skip': 'Blocked outside Mainland China',
}, {
'note': 'No lyrics.',
'url': 'http://music.163.com/song?id=17241424',
@@ -112,9 +162,9 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
'title': 'Opus 28',
'creator': 'Dustin O\'Halloran',
'upload_date': '20080211',
+ 'description': 'md5:f12945b0f6e0365e3b73c5032e1b0ff4',
'timestamp': 1202745600,
},
- 'skip': 'Blocked outside Mainland China',
}, {
'note': 'Has translated name.',
'url': 'http://music.163.com/#/song?id=22735043',
@@ -128,7 +178,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
'timestamp': 1264608000,
'alt_title': '说出愿望吧(Genie)',
},
- 'skip': 'Blocked outside Mainland China',
}]
def _process_lyrics(self, lyrics_info):
| [dl fail] Is netease module still being maintained?
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.10.29*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.10.29**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
Full command and output:
```
C:\Users\inkux\Desktop>youtube-dl https://music.163.com/#/song?id=33166366 --verbose --proxy ""
[debug] System config: []
[debug] User config: ['--proxy', 'socks5://[censored]/']
[debug] Custom config: []
[debug] Command-line args: ['https://music.163.com/#/song?id=33166366', '--verbose', '--proxy', '']
[debug] Encodings: locale cp936, fs mbcs, out cp936, pref cp936
[debug] youtube-dl version 2018.10.29
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-90414-gabf35afb6f, ffprobe N-90414-gabf35afb6f
[debug] Proxy map: {}
[netease:song] 33166366: Downloading song info
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
[netease:song] 33166366: Checking song URL
[netease:song] 33166366: song URL is invalid, skipping
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\common.py", line 508, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\neteasemusic.py", line 164, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\common.py", line 1287, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
------
I've noticed from the issue page that the netease module had been down for quite a while in 2016, but since I got an instruction to report this and those issues are pretty aged, I decided to report it anyway.
I was downloading this song, which is totally playable in my browser (Google Chrome) and is also downloadable as an mp3 file using the netease client (PC, Android), of course using a Chinese IP address.
As you can see, youtube-dl correctly recognized the ID of the song from its URL but has been unable to obtain any format.
And if this module is not going to be maintained for a while, I think it would be a good idea to disable it if it is believed it will never work again, so that nobody is asked to file a bug report after seeing netease music in your supported-sites list and then failing to download with yt-dl; netease is a pretty popular music site in China.
------
| Well there I am. Pretty late but I randomly came across this extractor.
There are changes on netease (music.163.com). They changed endpoints and have more than one now. Their response format is also different.
The song API (the m4a data) is now
`https://music.163.com/weapi/song/enhance/player/url/v1?csrf_token=`
and song details are now at
`https://music.163.com/weapi/v3/song/detail?csrf_token=`
and so on. If there is enough interest I can update the extractor.
@blackjack4494 I for one would be interested. Tried to use it just now and came across this report.
Hi, I think this is still not solved because the traceback remains the same. How can we start fixing this? There is also a geolocation restriction in this case.
[This API](https://github.com/Binaryify/NeteaseCloudMusicApi), written in Node.js, is able to get the real audio file URL for a given ID and quality. For example, https://api.moeblog.vip/163/ is a deployment of this API, and the real URL of the song with ID 29848621 can be obtained via `https://api.moeblog.vip/163/?type=url&id=29848621&br=128`. It worked.
Thus, examining the [Node.js API](https://github.com/Binaryify/NeteaseCloudMusicApi) will be helpful.
Geolocation restriction, like paid contents, seems to be unable to bypass, at least by this API.
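A quick way to sanity-check that endpoint from Python (hypothetical sketch; the deployment URL and the `type`/`id`/`br` parameters are the ones quoted above, and since the response format is not specified in this thread it is simply printed as text):
```python
import requests

# Ask the third-party NeteaseCloudMusicApi deployment mentioned above for the
# real audio file URL of song 29848621 at 128 kbps.
resp = requests.get('https://api.moeblog.vip/163/',
                    params={'type': 'url', 'id': 29848621, 'br': 128})
print(resp.status_code)
print(resp.text)
```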
I'll come back in a week after I finish my exams and try to fix this issue if no one else is going to do it.
I've translated relevant parts of the Node.js API to python, and the following script is able to get the real music file URL with a given id (needs [pycrypto](https://pypi.org/project/pycrypto/)):
```python
import requests
import json
from hashlib import md5
from Crypto.Cipher import AES
import Crypto
import time
from random import randint
HEADERS = {
"User-Agent": "Mozilla/5.0 (Linux; U; Android 9; zh-cn; Redmi Note 8 Build/PKQ1.190616.001) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/71.0.3578.141 Mobile Safari/537.36 XiaoMi/MiuiBrowser/12.5.22",
"Content-Type": "application/x-www-form-urlencoded",
"Referer": "https://music.163.com",
}
KEY = "e82ckenh8dichen8"
def pad(data):
# https://stackoverflow.com/a/10550004/12425329
pad = 16 - len(data) % 16
return data + pad * chr(pad)
def make_data_and_headers(song_id):
KEY = "e82ckenh8dichen8"
URL = "/api/song/enhance/player/url"
cookie = {
"osver": None,
"deviceId": None,
"appver": "8.0.0",
"versioncode": "140",
"mobilename": None,
"buildver": "1623435496",
"resolution": "1920x1080",
"__csrf": "",
"os": "pc",
"channel": None,
"requestId": f"{int(time.time()*1000)}_{randint(0, 1000):04}",
}
text = json.dumps(
{"ids": f"[{song_id}]", "br": 999000, "header": cookie},
separators=(",", ":"),
)
message = f"nobody{URL}use{text}md5forencrypt"
m = md5()
m.update(message.encode("latin1"))
digest = m.hexdigest()
data = f"{URL}-36cd479b6b5-{text}-36cd479b6b5-{digest}"
data = pad(data)
cipher = Crypto.Cipher.AES.new(KEY, AES.MODE_ECB)
encrypted = cipher.encrypt(data.encode("latin1"))
headers = HEADERS
process_v = lambda v: v if v is not None else "undefined"
headers.update(
{"Cookie": "; ".join([f"{k}={process_v(v)}" for [k, v] in cookie.items()])}
)
return (f"params={encrypted.hex().upper()}", headers)
if __name__ == "__main__":
song_id = input("song_id? (default to 491233178)")
if not song_id:
song_id = 491233178
data, headers = make_data_and_headers(song_id)
# print(data)
# print(headers)
r = requests.post(
"https://interface3.music.163.com/eapi/song/enhance/player/url",
data=data, # json.dumps(data, separators=(",", ":")),
headers=headers,
)
print(r.json())
```
The next challenge is to adapt it into youtube-dl | 2022-09-14T04:31:39Z | [] | [] |
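A rough sketch of that adaptation (hypothetical and untested against the live API; it simply reuses the pure-Python `pkcs7_padding`/`aes_ecb_encrypt` helpers added in the patch at the top of this record, so the pycrypto dependency goes away):
```python
from binascii import hexlify

from youtube_dl.aes import aes_ecb_encrypt, pkcs7_padding
from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes


def encrypt_eapi_params(data, key=b'e82ckenh8dichen8'):
    # Pad the request text to the AES block size, ECB-encrypt it with the
    # fixed key and return the upper-case hex string sent as `params=...`.
    padded = pkcs7_padding(bytes_to_intlist(data.encode('utf-8')))
    encrypted = intlist_to_bytes(aes_ecb_encrypt(padded, bytes_to_intlist(key)))
    return hexlify(encrypted).decode('ascii').upper()
```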
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\common.py", line 508, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\neteasemusic.py", line 164, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpadzwnijc\build\youtube_dl\extractor\common.py", line 1287, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,887 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-31243 | 7009bb9f3182449ae8cc05cc28b768b63030a485 | diff --git a/youtube_dl/extractor/motherless.py b/youtube_dl/extractor/motherless.py
--- a/youtube_dl/extractor/motherless.py
+++ b/youtube_dl/extractor/motherless.py
@@ -1,3 +1,4 @@
+# coding: utf-8
from __future__ import unicode_literals
import datetime
@@ -71,7 +72,7 @@ class MotherlessIE(InfoExtractor):
'title': 'a/ Hot Teens',
'categories': list,
'upload_date': '20210104',
- 'uploader_id': 'yonbiw',
+ 'uploader_id': 'anonymous',
'thumbnail': r're:https?://.*\.jpg',
'age_limit': 18,
},
@@ -127,7 +128,7 @@ def _real_extract(self, url):
comment_count = webpage.count('class="media-comment-contents"')
uploader_id = self._html_search_regex(
- r'"thumb-member-username">\s+<a href="/m/([^"]+)"',
+ r'''(?s)['"](?:media-meta-member|thumb-member-username)\b[^>]+>\s*<a\b[^>]+\bhref\s*=\s*['"]/m/([^"']+)''',
webpage, 'uploader_id')
categories = self._html_search_meta('keywords', webpage, default=None)
@@ -169,7 +170,7 @@ class MotherlessGroupIE(InfoExtractor):
'description': 'Sex can be funny. Wide smiles,laugh, games, fun of '
'any kind!'
},
- 'playlist_mincount': 9,
+ 'playlist_mincount': 0,
}]
@classmethod
@@ -208,9 +209,9 @@ def _real_extract(self, url):
r'<title>([\w\s]+\w)\s+-', webpage, 'title', fatal=False)
description = self._html_search_meta(
'description', webpage, fatal=False)
- page_count = self._int(self._search_regex(
- r'(\d+)</(?:a|span)><(?:a|span)[^>]+>\s*NEXT',
- webpage, 'page_count'), 'page_count')
+ page_count = str_to_int(self._search_regex(
+ r'(\d+)\s*</(?:a|span)>\s*<(?:a|span)[^>]+(?:>\s*NEXT|\brel\s*=\s*["\']?next)\b',
+ webpage, 'page_count', default='1'))
PAGE_SIZE = 80
def _get_page(idx):
| Motherless ERROR: Unable to extract uploader_id
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.06.06**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', 'best', '-ciw', '--verbose', 'https://motherless.com/0EBC4FA']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2021.06.06
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041
[debug] exe versions: ffmpeg git-2020-05-22-38490cb, ffprobe git-2020-05-22-38490cb
[debug] Proxy map: {}
[Motherless] 0EBC4FA: Downloading webpage
ERROR: Unable to extract uploader_id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\YoutubeDL.py", line 836, in __extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 534, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\motherless.py", line 131, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 1021, in _html_search_regex
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 1012, in _search_regex
youtube_dl.utils.RegexNotFoundError: Unable to extract uploader_id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
## Description
I ran the command:
youtube-dl -f best -ciw --verbose "https://motherless.com/0EBC4FA"
And got the error as written above.
| At lines 129ff. of `extractor/motherless.py`
```
uploader_id = self._html_search_regex(
r'"thumb-member-username">\s+<a href="/m/([^"]+)"',
webpage, 'uploader_id')
```
the final line should be
```
webpage, 'uploader_id', fatal=False)
```
to prevent this, while still retrieving the requested video.
For an improved version, we could also look for the `uploader_id` in page fragment `<span class="username">...</span>` and get resolution from the `data-quality` attribute of the `<video>` tag. Possibly use the `_parse_html5_media_entries()` method.
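A sketch of that suggestion (illustrative only; the pattern that was actually committed is the combined regex in the patch above):
```python
# Try the old thumbnail markup first, then fall back to the
# <span class="username">...</span> fragment, and don't abort the whole
# extraction if neither is present.
uploader_id = self._html_search_regex(
    (r'"thumb-member-username">\s*<a href="/m/([^"]+)"',
     r'<span class="username">([^<]+)</span>'),
    webpage, 'uploader_id', fatal=False)
```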
Same error here.
@dirkf `username` is a good idea.
This patch works for me
```
diff --git a/youtube_dl/extractor/motherless.py b/youtube_dl/extractor/motherless.py
index ef1e081f2..3e244caf1 100644
--- a/youtube_dl/extractor/motherless.py
+++ b/youtube_dl/extractor/motherless.py
@@ -127,7 +127,7 @@ class MotherlessIE(InfoExtractor):
comment_count = webpage.count('class="media-comment-contents"')
uploader_id = self._html_search_regex(
- r'"thumb-member-username">\s+<a href="/m/([^"]+)"',
+ r'<span class=\"username\">([^"]+)<\/span>',
webpage, 'uploader_id')
categories = self._html_search_meta('keywords', webpage, default=None)
``` | 2022-09-19T14:57:00Z | [] | [] |
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\YoutubeDL.py", line 815, in wrapper
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\YoutubeDL.py", line 836, in __extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 534, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\motherless.py", line 131, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 1021, in _html_search_regex
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpkqxnwl31\build\youtube_dl\extractor\common.py", line 1012, in _search_regex
youtube_dl.utils.RegexNotFoundError: Unable to extract uploader_id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,888 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-31453 | 195f22f679330549882a8234e7234942893a4902 | diff --git a/youtube_dl/extractor/cammodels.py b/youtube_dl/extractor/cammodels.py
--- a/youtube_dl/extractor/cammodels.py
+++ b/youtube_dl/extractor/cammodels.py
@@ -3,7 +3,6 @@
from .common import InfoExtractor
from ..utils import (
- ExtractorError,
int_or_none,
url_or_none,
)
@@ -20,32 +19,11 @@ class CamModelsIE(InfoExtractor):
def _real_extract(self, url):
user_id = self._match_id(url)
- webpage = self._download_webpage(
- url, user_id, headers=self.geo_verification_headers())
-
- manifest_root = self._html_search_regex(
- r'manifestUrlRoot=([^&\']+)', webpage, 'manifest', default=None)
-
- if not manifest_root:
- ERRORS = (
- ("I'm offline, but let's stay connected", 'This user is currently offline'),
- ('in a private show', 'This user is in a private show'),
- ('is currently performing LIVE', 'This model is currently performing live'),
- )
- for pattern, message in ERRORS:
- if pattern in webpage:
- error = message
- expected = True
- break
- else:
- error = 'Unable to find manifest URL root'
- expected = False
- raise ExtractorError(error, expected=expected)
-
manifest = self._download_json(
- '%s%s.json' % (manifest_root, user_id), user_id)
+ 'https://manifest-server.naiadsystems.com/live/s:%s.json' % user_id, user_id)
formats = []
+ thumbnails = []
for format_id, format_dict in manifest['formats'].items():
if not isinstance(format_dict, dict):
continue
@@ -85,6 +63,13 @@ def _real_extract(self, url):
'preference': -1,
})
else:
+ if format_id == 'jpeg':
+ thumbnails.append({
+ 'url': f['url'],
+ 'width': f['width'],
+ 'height': f['height'],
+ 'format_id': f['format_id'],
+ })
continue
formats.append(f)
self._sort_formats(formats)
@@ -92,6 +77,7 @@ def _real_extract(self, url):
return {
'id': user_id,
'title': self._live_title(user_id),
+ 'thumbnails': thumbnails,
'is_live': True,
'formats': formats,
'age_limit': 18
| [cammodels] ExtractorError: Unable to find manifest URL root
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2019.10.16**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
LATEST VERSION (broken):
```
# ./youtube-dl -vg https://www.cammodels.com/cam/agnesss
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-vg', u'https://www.cammodels.com/cam/agnesss']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.10.16
[debug] Python version 2.7.16 (CPython) - Linux-5.3.6-050306-generic-x86_64-with-Ubuntu-19.04-disco
[debug] exe versions: ffmpeg 4.1.3, ffprobe 4.1.3, rtmpdump 2.4
[debug] Proxy map: {}
ERROR: Unable to find manifest URL root; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "./youtube-dl/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "./youtube-dl/youtube_dl/extractor/cammodels.py", line 43, in _real_extract
raise ExtractorError(error, expected=expected)
ExtractorError: Unable to find manifest URL root; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
LAST KNOWN WORKING:
```
# ./youtube-dl-lastok -vg https://www.cammodels.com/cam/agnesss
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-vg', u'https://www.cammodels.com/cam/agnesss']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.06.27
[debug] Python version 2.7.16 (CPython) - Linux-5.3.6-050306-generic-x86_64-with-Ubuntu-19.04-disco
[debug] exe versions: ffmpeg 4.1.3, ffprobe 4.1.3, rtmpdump 2.4
[debug] Proxy map: {}
[debug] Default format spec: bestvideo+bestaudio/best
rtmp://sea1c-edge-33.naiadsystems.com:1936/live/01ead87f-66cc-465c-9cca-5f91e2505708_2000_1280x720_56
```
## Description
* You need to use a model that is online: change the name in the URL accordingly
* One could make it work again by reverting the changes in utils.py ("random_user_agent"):
```
utils.py, line 1669:
> 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0',
< 'User-Agent': random_user_agent(),
```
* not sure whether OS updates or website changes made a difference, because a youtube-dl version in the range _2019.06.27 < ok < 2019.09.28_ was still working in September, when utils.py already had the newer code
* no difference in using python2 or python3
Thanks for your work.
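An aside on the reporter's workaround (hypothetical sketch, not the committed fix, which switches to the manifest-server URL instead): the same effect could probably be had inside the extractor by pinning the old header on just this request rather than reverting `random_user_agent()` globally.
```python
# Pin the pre-2019 Firefox User-Agent for this one request only, instead of
# editing utils.py.
webpage = self._download_webpage(
    url, user_id, headers={
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0',
    })
```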
| 2023-01-06T04:51:02Z | [] | [] |
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "./youtube-dl/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "./youtube-dl/youtube_dl/extractor/cammodels.py", line 43, in _real_extract
raise ExtractorError(error, expected=expected)
ExtractorError: Unable to find manifest URL root; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,895 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-3202 | d24a2b20b4908b01f9b5bfa88cd293c189fe6475 | diff --git a/youtube_dl/extractor/fc2.py b/youtube_dl/extractor/fc2.py
--- a/youtube_dl/extractor/fc2.py
+++ b/youtube_dl/extractor/fc2.py
@@ -7,14 +7,16 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
+ compat_urllib_parse,
compat_urllib_request,
compat_urlparse,
)
class FC2IE(InfoExtractor):
- _VALID_URL = r'^http://video\.fc2\.com/((?P<lang>[^/]+)/)?content/(?P<id>[^/]+)'
+ _VALID_URL = r'^http://video\.fc2\.com/((?P<lang>[^/]+)/)?(a/)?content/(?P<id>[^/]+)'
IE_NAME = 'fc2'
+ _NETRC_MACHINE = 'fc2'
_TEST = {
'url': 'http://video.fc2.com/en/content/20121103kUan1KHs',
'md5': 'a6ebe8ebe0396518689d963774a54eb7',
@@ -25,17 +27,53 @@ class FC2IE(InfoExtractor):
},
}
+ #def _real_initialize(self):
+ # self._login()
+
+ def _login(self):
+ (username, password) = self._get_login_info()
+ if (username is None) or (password is None):
+ self.to_screen('unable to log in: will be downloading in non authorized mode') # report_warning
+ return False
+
+ # Log in
+ login_form_strs = {
+ 'email': username,
+ 'password': password,
+ 'done': 'video',
+ 'Submit': ' Login ',
+ }
+
+ # Convert to UTF-8 *before* urlencode because Python 2.x's urlencode
+ # chokes on unicode
+ login_form = dict((k.encode('utf-8'), v.encode('utf-8')) for k, v in login_form_strs.items())
+ login_data = compat_urllib_parse.urlencode(login_form).encode('utf-8')
+ request = compat_urllib_request.Request(
+ 'https://secure.id.fc2.com/index.php?mode=login&switch_language=en', login_data)
+
+ login_results = self._download_webpage(request, None, note='Logging in', errnote='Unable to log in')
+ if 'mode=redirect&login=done' not in login_results:
+ self.to_screen('unable to log in: bad username or password') # report_warning
+ return False
+
+ # this is also needed
+ login_redir = compat_urllib_request.Request('http://id.fc2.com/?mode=redirect&login=done')
+ redir_res = self._download_webpage(login_redir, None, note='Login redirect', errnote='Something is not right')
+
+ return True
+
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
webpage = self._download_webpage(url, video_id)
self._downloader.cookiejar.clear_session_cookies() # must clear
+ self._login()
title = self._og_search_title(webpage)
thumbnail = self._og_search_thumbnail(webpage)
- refer = url.replace('/content/', '/a/content/')
+ refer = (url if '/a/content/' in url else url.replace('/content/', '/a/content/'));
mimi = hashlib.md5((video_id + '_gGddgPfeaf_gzyr').encode('utf-8')).hexdigest()
info_url = (
@@ -47,7 +85,12 @@ def _real_extract(self, url):
info = compat_urlparse.parse_qs(info_webpage)
if 'err_code' in info:
- raise ExtractorError('Error code: %s' % info['err_code'][0])
+ #raise ExtractorError('Error code: %s' % info['err_code'][0])
+ # most of the time we can still download wideo even if err_code is 403 or 602
+ self.to_screen('Error code was: %s... but still trying' % info['err_code'][0]) # report_warning
+
+ if 'filepath' not in info:
+ raise ExtractorError('Cannot download file. Are you logged?')
video_url = info['filepath'][0] + '?mid=' + info['mid'][0]
title_info = info.get('title')
| [fc2] 403 error
```
youtube-dl http://video.fc2.com/en/content/20130113eqtNRAv5 -v
[debug] System config: []
[debug] User config: ['--age-limit', '17']
[debug] Command-line args: ['http://video.fc2.com/en/content/20130113eqtNRAv5', '-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.05.13
[debug] Git HEAD: 1800eee
[debug] Python version 2.7.6 - Linux-3.13-1-amd64-x86_64-with-debian-jessie-sid
[debug] Proxy map: {}
[fc2] 20130113eqtNRAv5: Downloading webpage
[fc2] 20130113eqtNRAv5: Downloading info page
ERROR: Error code: 403; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "youtube_dl/YoutubeDL.py", line 516, in extract_info
ie_result = ie.extract(url)
File "youtube_dl/extractor/common.py", line 161, in extract
return self._real_extract(url)
File "youtube_dl/extractor/fc2.py", line 50, in _real_extract
raise ExtractorError('Error code: %s' % info['err_code'][0])
ExtractorError: Error code: 403; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
It looks like the problem is the missing PHPSESSID cookie. We should fix cookie handling in fc2.
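For reference, the patch above tackles this by logging in before fetching the info page; the cookie-establishing step boils down to the following (sketch extracted from the `_login()` method in the diff):
```python
# POST the FC2 login form; the server's Set-Cookie response populates the
# extractor's cookiejar, so the later getinfo request is authorised.
login_form = {
    'email': username,
    'password': password,
    'done': 'video',
    'Submit': ' Login ',
}
login_data = compat_urllib_parse.urlencode(login_form).encode('utf-8')
request = compat_urllib_request.Request(
    'https://secure.id.fc2.com/index.php?mode=login&switch_language=en', login_data)
login_results = self._download_webpage(request, None, note='Logging in')
```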
| Getting same error on some FC2 videos, on Windows with youtube-dl version 2014.06.09.
```
youtube-dl -u EMAIL -p PASSWORD http://video.fc2.com/a/content/20140519VeQg1FwP -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-u', '<PRIVATE>', '-p', '<PRIVATE>', 'http://video.fc2.com/a/content/20140519VeQg1FwP', '-v']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2014.06.09
[debug] Python version 2.7.5 - Windows-8-6.2.9200
[debug] Proxy map: {}
[fc2] 20140519VeQg1FwP: Downloading webpage
[fc2] 20140519VeQg1FwP: Downloading info page
ERROR: Error code: 403; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 516, in extract_info
File "youtube_dl\extractor\common.pyo", line 167, in extract
File "youtube_dl\extractor\fc2.pyo", line 50, in _real_extract
ExtractorError: Error code: 403; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
| 2014-07-06T00:08:43Z | [] | [] |
Traceback (most recent call last):
File "youtube_dl/YoutubeDL.py", line 516, in extract_info
ie_result = ie.extract(url)
File "youtube_dl/extractor/common.py", line 161, in extract
return self._real_extract(url)
File "youtube_dl/extractor/fc2.py", line 50, in _real_extract
raise ExtractorError('Error code: %s' % info['err_code'][0])
ExtractorError: Error code: 403; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| 18,900 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4009 | 8f3b5397a761d68122bc1bd66d049fbbe31289a2 | diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py
--- a/youtube_dl/extractor/generic.py
+++ b/youtube_dl/extractor/generic.py
@@ -380,6 +380,17 @@ class GenericIE(InfoExtractor):
'uploader': 'education-portal.com',
},
},
+ {
+ 'url': 'http://thoughtworks.wistia.com/medias/uxjb0lwrcz',
+ 'md5': 'baf49c2baa8a7de5f3fc145a8506dcd4',
+ 'info_dict': {
+ 'id': 'uxjb0lwrcz',
+ 'ext': 'mp4',
+ 'title': 'Conversation about Hexagonal Rails Part 1 - ThoughtWorks',
+ 'duration': 1715.0,
+ 'uploader': 'thoughtworks.wistia.com',
+ },
+ },
]
def report_following_redirect(self, new_url):
@@ -652,7 +663,7 @@ def _playlist_from_matches(matches, getter, ie=None):
# Look for embedded Wistia player
match = re.search(
- r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:fast\.)?wistia\.net/embed/iframe/.+?)\1', webpage)
+ r'(?:<meta content|<iframe[^>]+?src)=(["\'])(?P<url>(?:https?:)?//(?:fast\.)?wistia\.net/embed/iframe/.+?)\1', webpage)
if match:
embed_url = self._proto_relative_url(
unescapeHTML(match.group('url')))
@@ -664,6 +675,7 @@ def _playlist_from_matches(matches, getter, ie=None):
'title': video_title,
'id': video_id,
}
+
match = re.search(r'(?:id=["\']wistia_|data-wistia-?id=["\']|Wistia\.embed\(["\'])(?P<id>[^"\']+)', webpage)
if match:
return {
| Site Support Request: thoughtworks@wistia
According to bug reports, other sites using Wistia seem to be supported by yt-dl, but the videos by Thoughtworks don't work yet, e.g. the one at http://thoughtworks.wistia.com/medias/uxjb0lwrcz :
```
youtube-dl -v http://thoughtworks.wistia.com/medias/uxjb0lwrcz
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://thoughtworks.wistia.com/medias/uxjb0lwrcz']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.10.18
[debug] Python version 2.7.6 - Darwin-14.0.0-x86_64-i386-64bit
[debug] Proxy map: {}
[generic] uxjb0lwrcz: Requesting header
WARNING: Falling back on generic information extractor.
[generic] uxjb0lwrcz: Downloading webpage
[generic] uxjb0lwrcz: Extracting information
[Wistia] video: Downloading JSON metadata
ERROR: Error while getting the playlist
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 524, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 193, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/wistia.py", line 33, in _real_extract
expected=True)
ExtractorError: Error while getting the playlist
```
| 2014-10-23T15:00:29Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 524, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 193, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/wistia.py", line 33, in _real_extract
expected=True)
ExtractorError: Error while getting the playlist
| 18,923 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4025 | e82c1e9a6e709502acc683fb90864f7611d96d07 | diff --git a/youtube_dl/extractor/motherless.py b/youtube_dl/extractor/motherless.py
--- a/youtube_dl/extractor/motherless.py
+++ b/youtube_dl/extractor/motherless.py
@@ -11,14 +11,14 @@
class MotherlessIE(InfoExtractor):
- _VALID_URL = r'http://(?:www\.)?motherless\.com/(?P<id>[A-Z0-9]+)'
+ _VALID_URL = r'http://(?:www\.)?motherless\.com/(?:g/[a-z0-9_]+/)?(?P<id>[A-Z0-9]+)'
_TESTS = [
{
'url': 'http://motherless.com/AC3FFE1',
- 'md5': '5527fef81d2e529215dad3c2d744a7d9',
+ 'md5': '310f62e325a9fafe64f68c0bccb6e75f',
'info_dict': {
'id': 'AC3FFE1',
- 'ext': 'flv',
+ 'ext': 'mp4',
'title': 'Fucked in the ass while playing PS3',
'categories': ['Gaming', 'anal', 'reluctant', 'rough', 'Wife'],
'upload_date': '20100913',
@@ -40,6 +40,20 @@ class MotherlessIE(InfoExtractor):
'thumbnail': 're:http://.*\.jpg',
'age_limit': 18,
}
+ },
+ {
+ 'url': 'http://motherless.com/g/cosplay/633979F',
+ 'md5': '0b2a43f447a49c3e649c93ad1fafa4a0',
+ 'info_dict': {
+ 'id': '633979F',
+ 'ext': 'mp4',
+ 'title': 'Turtlette',
+ 'categories': ['superheroine heroine superher'],
+ 'upload_date': '20140827',
+ 'uploader_id': 'shade0230',
+ 'thumbnail': 're:http://.*\.jpg',
+ 'age_limit': 18,
+ }
}
]
| motherless extractor - test 0 fails
```
$ python ~/projects/youtube-dl/test/test_download.py TestDownload.test_Motherless
[Motherless] AC3FFE1: Downloading webpage
[info] Writing video description metadata as JSON to: AC3FFE1.info.json
[debug] Invoking downloader on 'http://s17.motherlessmedia.com/dev386/0/572/287/0572287847.mp4/5cb6d38eccba71d7f6bb2ef260997c3d/544A96C0.mp4'
[download] Destination: AC3FFE1.mp4
[download] 100% of 10.00KiB in 00:00
F
======================================================================
FAIL: test_Motherless (__main__.TestDownload)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/crabman/projects/youtube-dl/test/test_download.py", line 170, in test_template
self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
AssertionError: False is not true : Missing file AC3FFE1.flv
----------------------------------------------------------------------
Ran 1 test in 1.690s
FAILED (failures=1)
```
Apparently that video's page no longer gives you flv, but instead gives you an mp4 file. I am not sure why that happened. Maybe motherless doesn't serve flv files anymore, or maybe it still does for some videos - I don't know.
| 2014-10-24T17:30:02Z | [] | [] |
Traceback (most recent call last):
File "/home/crabman/projects/youtube-dl/test/test_download.py", line 170, in test_template
self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
AssertionError: False is not true : Missing file AC3FFE1.flv
| 18,925 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4388 | 0df23ba9f9ddef005ec5f592ccb43b41564b5767 | diff --git a/youtube_dl/extractor/adultswim.py b/youtube_dl/extractor/adultswim.py
--- a/youtube_dl/extractor/adultswim.py
+++ b/youtube_dl/extractor/adultswim.py
@@ -2,123 +2,147 @@
from __future__ import unicode_literals
import re
+import json
from .common import InfoExtractor
+from ..utils import (
+ ExtractorError,
+)
class AdultSwimIE(InfoExtractor):
- _VALID_URL = r'https?://video\.adultswim\.com/(?P<path>.+?)(?:\.html)?(?:\?.*)?(?:#.*)?$'
- _TEST = {
- 'url': 'http://video.adultswim.com/rick-and-morty/close-rick-counters-of-the-rick-kind.html?x=y#title',
+ _VALID_URL = r'https?://(?:www\.)?adultswim\.com/videos/(?P<is_playlist>playlists/)?(?P<show_path>[^/]+)/(?P<episode_path>[^/?#]+)/?'
+
+ _TESTS = [{
+ 'url': 'http://adultswim.com/videos/rick-and-morty/pilot',
'playlist': [
- {
- 'md5': '4da359ec73b58df4575cd01a610ba5dc',
+ {
+ 'md5': '247572debc75c7652f253c8daa51a14d',
'info_dict': {
- 'id': '8a250ba1450996e901453d7f02ca02f5',
+ 'id': 'rQxZvXQ4ROaSOqq-or2Mow-0',
'ext': 'flv',
- 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 1',
- 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?',
- 'uploader': 'Rick and Morty',
- 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg'
- }
+ 'title': 'Rick and Morty - Pilot Part 1',
+ 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+ },
},
{
- 'md5': 'ffbdf55af9331c509d95350bd0cc1819',
+ 'md5': '77b0e037a4b20ec6b98671c4c379f48d',
'info_dict': {
- 'id': '8a250ba1450996e901453d7f4bd102f6',
+ 'id': 'rQxZvXQ4ROaSOqq-or2Mow-3',
'ext': 'flv',
- 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 2',
- 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?',
- 'uploader': 'Rick and Morty',
- 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg'
- }
- },
- {
- 'md5': 'b92409635540304280b4b6c36bd14a0a',
- 'info_dict': {
- 'id': '8a250ba1450996e901453d7fa73c02f7',
- 'ext': 'flv',
- 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 3',
- 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?',
- 'uploader': 'Rick and Morty',
- 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg'
- }
+ 'title': 'Rick and Morty - Pilot Part 4',
+ 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+ },
},
+ ],
+ 'info_dict': {
+ 'title': 'Rick and Morty - Pilot',
+ 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+ }
+ }, {
+ 'url': 'http://www.adultswim.com/videos/playlists/american-parenting/putting-francine-out-of-business/',
+ 'playlist': [
{
- 'md5': 'e8818891d60e47b29cd89d7b0278156d',
+ 'md5': '2eb5c06d0f9a1539da3718d897f13ec5',
'info_dict': {
- 'id': '8a250ba1450996e901453d7fc8ba02f8',
+ 'id': '-t8CamQlQ2aYZ49ItZCFog-0',
'ext': 'flv',
- 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 4',
- 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?',
- 'uploader': 'Rick and Morty',
- 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg'
- }
+ 'title': 'American Dad - Putting Francine Out of Business',
+ 'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].'
+ },
}
- ]
- }
-
- _video_extensions = {
- '3500': 'flv',
- '640': 'mp4',
- '150': 'mp4',
- 'ipad': 'm3u8',
- 'iphone': 'm3u8'
- }
- _video_dimensions = {
- '3500': (1280, 720),
- '640': (480, 270),
- '150': (320, 180)
- }
+ ],
+ 'info_dict': {
+ 'title': 'American Dad - Putting Francine Out of Business',
+ 'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].'
+ },
+ }]
+
+ @staticmethod
+ def find_video_info(collection, slug):
+ for video in collection.get('videos'):
+ if video.get('slug') == slug:
+ return video
+
+ @staticmethod
+ def find_collection_by_linkURL(collections, linkURL):
+ for collection in collections:
+ if collection.get('linkURL') == linkURL:
+ return collection
+
+ @staticmethod
+ def find_collection_containing_video(collections, slug):
+ for collection in collections:
+ for video in collection.get('videos'):
+ if video.get('slug') == slug:
+ return collection, video
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
- video_path = mobj.group('path')
-
- webpage = self._download_webpage(url, video_path)
- episode_id = self._html_search_regex(
- r'<link rel="video_src" href="http://i\.adultswim\.com/adultswim/adultswimtv/tools/swf/viralplayer.swf\?id=([0-9a-f]+?)"\s*/?\s*>',
- webpage, 'episode_id')
- title = self._og_search_title(webpage)
-
- index_url = 'http://asfix.adultswim.com/asfix-svc/episodeSearch/getEpisodesByIDs?networkName=AS&ids=%s' % episode_id
- idoc = self._download_xml(index_url, title, 'Downloading episode index', 'Unable to download episode index')
-
- episode_el = idoc.find('.//episode')
- show_title = episode_el.attrib.get('collectionTitle')
- episode_title = episode_el.attrib.get('title')
- thumbnail = episode_el.attrib.get('thumbnailUrl')
- description = episode_el.find('./description').text.strip()
+ show_path = mobj.group('show_path')
+ episode_path = mobj.group('episode_path')
+ is_playlist = True if mobj.group('is_playlist') else False
+
+ webpage = self._download_webpage(url, episode_path)
+
+ # Extract the value of `bootstrappedData` from the Javascript in the page.
+ bootstrappedDataJS = self._search_regex(r'var bootstrappedData = ({.*});', webpage, episode_path)
+
+ try:
+ bootstrappedData = json.loads(bootstrappedDataJS)
+ except ValueError as ve:
+ errmsg = '%s: Failed to parse JSON ' % episode_path
+ raise ExtractorError(errmsg, cause=ve)
+
+ # Downloading videos from a /videos/playlist/ URL needs to be handled differently.
+ # NOTE: We are only downloading one video (the current one) not the playlist
+ if is_playlist:
+ collections = bootstrappedData['playlists']['collections']
+ collection = self.find_collection_by_linkURL(collections, show_path)
+ video_info = self.find_video_info(collection, episode_path)
+
+ show_title = video_info['showTitle']
+ segment_ids = [video_info['videoPlaybackID']]
+ else:
+ collections = bootstrappedData['show']['collections']
+ collection, video_info = self.find_collection_containing_video(collections, episode_path)
+
+ show = bootstrappedData['show']
+ show_title = show['title']
+ segment_ids = [clip['videoPlaybackID'] for clip in video_info['clips']]
+
+ episode_id = video_info['id']
+ episode_title = video_info['title']
+ episode_description = video_info['description']
+ episode_duration = video_info.get('duration')
entries = []
- segment_els = episode_el.findall('./segments/segment')
+ for part_num, segment_id in enumerate(segment_ids):
+ segment_url = 'http://www.adultswim.com/videos/api/v0/assets?id=%s&platform=mobile' % segment_id
- for part_num, segment_el in enumerate(segment_els):
- segment_id = segment_el.attrib.get('id')
- segment_title = '%s %s part %d' % (show_title, episode_title, part_num + 1)
- thumbnail = segment_el.attrib.get('thumbnailUrl')
- duration = segment_el.attrib.get('duration')
+ segment_title = '%s - %s' % (show_title, episode_title)
+ if len(segment_ids) > 1:
+ segment_title += ' Part %d' % (part_num + 1)
- segment_url = 'http://asfix.adultswim.com/asfix-svc/episodeservices/getCvpPlaylist?networkName=AS&id=%s' % segment_id
idoc = self._download_xml(
segment_url, segment_title,
'Downloading segment information', 'Unable to download segment information')
+ segment_duration = idoc.find('.//trt').text.strip()
+
formats = []
file_els = idoc.findall('.//files/file')
for file_el in file_els:
bitrate = file_el.attrib.get('bitrate')
- type = file_el.attrib.get('type')
- width, height = self._video_dimensions.get(bitrate, (None, None))
+ ftype = file_el.attrib.get('type')
+
formats.append({
- 'format_id': '%s-%s' % (bitrate, type),
- 'url': file_el.text,
- 'ext': self._video_extensions.get(bitrate, 'mp4'),
+ 'format_id': '%s_%s' % (bitrate, ftype),
+ 'url': file_el.text.strip(),
# The bitrate may not be a number (for example: 'iphone')
'tbr': int(bitrate) if bitrate.isdigit() else None,
- 'height': height,
- 'width': width
+ 'quality': 1 if ftype == 'hd' else -1
})
self._sort_formats(formats)
@@ -127,18 +151,16 @@ def _real_extract(self, url):
'id': segment_id,
'title': segment_title,
'formats': formats,
- 'uploader': show_title,
- 'thumbnail': thumbnail,
- 'duration': duration,
- 'description': description
+ 'duration': segment_duration,
+ 'description': episode_description
})
return {
'_type': 'playlist',
'id': episode_id,
- 'display_id': video_path,
+ 'display_id': episode_path,
'entries': entries,
- 'title': '%s %s' % (show_title, episode_title),
- 'description': description,
- 'thumbnail': thumbnail
+ 'title': '%s - %s' % (show_title, episode_title),
+ 'description': episode_description,
+ 'duration': episode_duration
}
| Adultswim.com not able to download
```
$youtube-dl --version
2014.11.26
$ youtube-dl --verbose http://www.adultswim.com/videos/lucy-the-daughter-of-the-devil/hes-not-the-messiah-hes-a-dj/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'http://www.adultswim.com/videos/lucy-the-daughter-of-the-devil/hes-not-the-messiah-hes-a-dj/']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.11.26
[debug] Python version 3.4.2 - Linux-3.17.3-1-ARCH-x86_64-with-arch-Arch-Linux
[debug] exe versions: ffmpeg 2.4.3, ffprobe 2.4.3, rtmpdump 2.4
[debug] Proxy map: {}
[generic] hes-not-the-messiah-hes-a-dj: Requesting header
WARNING: Falling back on generic information extractor.
[generic] hes-not-the-messiah-hes-a-dj: Downloading webpage
[generic] hes-not-the-messiah-hes-a-dj: Extracting information
ERROR: Unsupported URL: http://www.adultswim.com/videos/lucy-the-daughter-of-the-devil/hes-not-the-messiah-hes-a-dj/; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 240, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1044, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: http://www.adultswim.com/videos/lucy-the-daughter-of-the-devil/hes-not-the-messiah-hes-a-dj/; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
Support adultswim.com
For example http://video.adultswim.com/dinner-with-friends-with-brett-gelman-and-friends/dinner-with-friends-with-brett-gelman-and-friends.html
| I'm pretty sure that the site that you are trying to download from is not supported.
@nighthawk702 We _do_ support adultswim (it's in the [list of supported sites](https://rg3.github.io/youtube-dl/supportedsites.html)). This seems to be a new way of video distribution for them. By the way, there's nothing wrong with filing issues asking for support for new sites.
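Concretely, the new pages embed their metadata in a `bootstrappedData` JavaScript object, which the patch above reads roughly like this (sketch):
```python
import json

# The episode/playlist metadata now lives in a JSON blob assigned to
# `var bootstrappedData = {...};` in the page's JavaScript.
bootstrapped_data_js = self._search_regex(
    r'var bootstrappedData = ({.*});', webpage, episode_path)
bootstrapped_data = json.loads(bootstrapped_data_js)
```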
| 2014-12-06T06:10:22Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 240, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1044, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: http://www.adultswim.com/videos/lucy-the-daughter-of-the-devil/hes-not-the-messiah-hes-a-dj/; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,931 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4389 | 0df23ba9f9ddef005ec5f592ccb43b41564b5767 | diff --git a/youtube_dl/extractor/nba.py b/youtube_dl/extractor/nba.py
--- a/youtube_dl/extractor/nba.py
+++ b/youtube_dl/extractor/nba.py
@@ -10,7 +10,7 @@
class NBAIE(InfoExtractor):
- _VALID_URL = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\.html)?(?:\?.*)?$'
+ _VALID_URL = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)/?(?:/index\.html)?(?:\?.*)?$'
_TEST = {
'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',
'md5': 'c0edcfc37607344e2ff8f13c378c88a4',
| NBA URLs FAIL without INDEX.HTML
the NBA extractor does not work if the URL does not explicitly end with index.html (which appears to be the default)
URL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/
C:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n
yk-cha-play5.nba/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1
2/05/0021400276-nyk-cha-play5.nba/']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-40824-
[debug] Proxy map: {}
[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/: Downloading webpa
ge
[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h
ornets/2014/12/05/0021400276-nyk-cha-play5.nba/_nba_1280x720.mp4'
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 1091, in process_info
File "youtube_dl\YoutubeDL.pyo", line 1067, in dl
File "youtube_dl\downloader\common.pyo", line 294, in download
File "youtube_dl\downloader\http.pyo", line 66, in real_download
File "youtube_dl\YoutubeDL.pyo", line 1325, in urlopen
File "urllib2.pyo", line 410, in open
File "urllib2.pyo", line 523, in http_response
File "urllib2.pyo", line 448, in error
File "urllib2.pyo", line 382, in _call_chain
File "urllib2.pyo", line 531, in http_error_default
HTTPError: HTTP Error 404: Not Found
(same vid but with index.html)
URL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/index.html
C:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n
yk-cha-play5.nba/index.html
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1
2/05/0021400276-nyk-cha-play5.nba/index.html']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-40824-
[debug] Proxy map: {}
[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba: Downloading webpag
e
[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h
ornets/2014/12/05/0021400276-nyk-cha-play5.nba_nba_1280x720.mp4'
[download] Destination: Walker From Behind-0021400276-nyk-cha-play5.nba.mp4
[download] 100% of 5.76MiB in 00:04
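For reference, the difference between the two runs above comes down to what the lazy `id` group captures; a small demo of the old and patched `_VALID_URL` patterns taken from the diff above:
```python
import re

OLD = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\.html)?(?:\?.*)?$'
NEW = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)/?(?:/index\.html)?(?:\?.*)?$'

url = 'http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/'

# Old pattern: the id keeps the trailing slash, so the CDN URL is built as
# ".../0021400276-nyk-cha-play5.nba/_nba_1280x720.mp4", which 404s.
print(re.match(OLD, url).group('id'))
# Patched pattern: the optional "/?" absorbs the slash, so the id matches what
# the index.html form yields and the working ".nba_nba_1280x720.mp4" URL results.
print(re.match(NEW, url).group('id'))
```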
| 2014-12-06T09:54:00Z | [] | [] |
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 1091, in process_info
File "youtube_dl\YoutubeDL.pyo", line 1067, in dl
File "youtube_dl\downloader\common.pyo", line 294, in download
File "youtube_dl\downloader\http.pyo", line 66, in real_download
File "youtube_dl\YoutubeDL.pyo", line 1325, in urlopen
File "urllib2.pyo", line 410, in open
File "urllib2.pyo", line 523, in http_response
File "urllib2.pyo", line 448, in error
File "urllib2.pyo", line 382, in _call_chain
File "urllib2.pyo", line 531, in http_error_default
HTTPError: HTTP Error 404: Not Found
| 18,932 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4394 | 6d0886204a920e64606688b1217835d10e47d281 | diff --git a/youtube_dl/extractor/prosiebensat1.py b/youtube_dl/extractor/prosiebensat1.py
--- a/youtube_dl/extractor/prosiebensat1.py
+++ b/youtube_dl/extractor/prosiebensat1.py
@@ -8,6 +8,7 @@
from ..utils import (
compat_urllib_parse,
unified_strdate,
+ ExtractorError,
)
@@ -85,7 +86,7 @@ class ProSiebenSat1IE(InfoExtractor):
'ext': 'mp4',
'title': 'Im Interview: Kai Wiesinger',
'description': 'md5:e4e5370652ec63b95023e914190b4eb9',
- 'upload_date': '20140225',
+ 'upload_date': '20140203',
'duration': 522.56,
},
'params': {
@@ -100,7 +101,7 @@ class ProSiebenSat1IE(InfoExtractor):
'ext': 'mp4',
'title': 'Jagd auf Fertigkost im Elsthal - Teil 2',
'description': 'md5:2669cde3febe9bce13904f701e774eb6',
- 'upload_date': '20140225',
+ 'upload_date': '20141014',
'duration': 2410.44,
},
'params': {
@@ -152,12 +153,22 @@ class ProSiebenSat1IE(InfoExtractor):
'skip_download': True,
},
},
+ {
+ 'url': 'http://www.prosieben.de/tv/joko-gegen-klaas/videos/playlists/episode-8-ganze-folge-playlist',
+ 'info_dict': {
+ 'id': '439664',
+ 'title': 'Episode 8 - Ganze Folge - Playlist',
+ 'description': 'Das finale und härteste Duell aller Zeiten ist vorbei! Der Weltmeister für dieses Jahr steht! Alle packenden Duelle der achten Episode von "Joko gegen Klaas - das Duell um die Welt" seht ihr hier noch einmal in voller Länge!',
+ },
+ 'playlist_count': 2,
+ },
]
_CLIPID_REGEXES = [
r'"clip_id"\s*:\s+"(\d+)"',
r'clipid: "(\d+)"',
r'clip[iI]d=(\d+)',
+ r"'itemImageUrl'\s*:\s*'/dynamic/thumbnails/full/\d+/(\d+)",
]
_TITLE_REGEXES = [
r'<h2 class="subtitle" itemprop="name">\s*(.+?)</h2>',
@@ -178,11 +189,48 @@ class ProSiebenSat1IE(InfoExtractor):
r'<span style="padding-left: 4px;line-height:20px; color:#404040">(\d{2}\.\d{2}\.\d{4})</span>',
r'(\d{2}\.\d{2}\.\d{4}) \| \d{2}:\d{2} Min<br/>',
]
+ _ITEM_TYPE_REGEXES = [
+ r"'itemType'\s*:\s*'([^']*)'",
+ ]
+ _ITEM_ID_REGEXES = [
+ r"'itemId'\s*:\s*'([^']*)'",
+ ]
+ _PLAYLIST_CLIPS_REGEXES = [
+ r'data-qvt=.+?<a href="([^"]+)"',
+ ]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
+ item_type = self._html_search_regex(self._ITEM_TYPE_REGEXES, webpage, 'item type', default='CLIP')
+ if item_type == 'CLIP':
+ return self._clip_extract(url, webpage)
+ elif item_type == 'PLAYLIST':
+ playlist_id = self._html_search_regex(self._ITEM_ID_REGEXES, webpage, 'playlist id')
+
+ for regex in self._PLAYLIST_CLIPS_REGEXES:
+ playlist_clips = re.findall(regex, webpage, re.DOTALL)
+ if playlist_clips:
+ title = self._html_search_regex(self._TITLE_REGEXES, webpage, 'title')
+ description = self._html_search_regex(self._DESCRIPTION_REGEXES, webpage, 'description', fatal=False)
+ root_url = re.match('(.+?//.+?)/', url).group(1)
+
+ return {
+ '_type': 'playlist',
+ 'id': playlist_id,
+ 'title': title,
+ 'description': description,
+ 'entries': [self._clip_extract(root_url + clip_path) for clip_path in playlist_clips]
+ }
+ else:
+ raise ExtractorError('Unknown item type "%s"' % item_type)
+
+ def _clip_extract(self, url, webpage=None):
+ if webpage is None:
+ video_id = self._match_id(url)
+ webpage = self._download_webpage(url, video_id)
+
clip_id = self._html_search_regex(self._CLIPID_REGEXES, webpage, 'clip id')
access_token = 'testclient'
| [prosiebensat1] Unable to download playlist
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'http://www.prosieben.de/tv/joko-gegen-klaas/videos/playlists/episode-8-ganze-folge-playlist']
[debug] Encodings: locale UTF-8, fs utf-8, out None, pref UTF-8
[debug] youtube-dl version 2014.12.01
[debug] Python version 2.7.6 - Darwin-14.0.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.4.4, ffprobe 2.4.4, rtmpdump 2.4
[debug] Proxy map: {}
[prosiebensat1] tv/joko-gegen-klaas/videos/playlists/episode-8-ganze-folge-playlist: Downloading webpage
ERROR: Unable to extract clip id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 241, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/prosiebensat1.py", line 186, in _real_extract
clip_id = self._html_search_regex(self._CLIPID_REGEXES, webpage, 'clip id')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 491, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 481, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract clip id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| I guess downloading playlists isn't supported atm?
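Right — playlist pages carry no clip id at all, which is why the regexes fail; the patch above branches on the page's `itemType` and feeds each linked clip back through the single-clip path. A self-contained sketch of that branch (the page fragment below is invented to match the patch's regexes):
```python
import re

# Hypothetical excerpt of a ProSiebenSat.1 playlist page, shaped after the
# 'itemType'/'itemId' and data-qvt patterns used in the patch.
webpage = """
'itemType' : 'PLAYLIST', 'itemId' : '439664',
<div data-qvt="..."><a href="/tv/joko-gegen-klaas/video/teil-1">Teil 1</a></div>
<div data-qvt="..."><a href="/tv/joko-gegen-klaas/video/teil-2">Teil 2</a></div>
"""

item_type = re.search(r"'itemType'\s*:\s*'([^']*)'", webpage).group(1)
if item_type == 'PLAYLIST':
    playlist_id = re.search(r"'itemId'\s*:\s*'([^']*)'", webpage).group(1)
    clip_paths = re.findall(r'data-qvt=.+?<a href="([^"]+)"', webpage, re.DOTALL)
    # In the extractor each path is resolved against the site root and handed
    # to the existing clip extraction; here we only show what gets collected.
    print(playlist_id, clip_paths)
```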
| 2014-12-06T18:31:52Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 241, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/prosiebensat1.py", line 186, in _real_extract
clip_id = self._html_search_regex(self._CLIPID_REGEXES, webpage, 'clip id')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 491, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 481, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract clip id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,933 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4395 | 0ef4d4ab7e15833031cd43211a38464a9ab9aa17 | diff --git a/youtube_dl/YoutubeDL.py b/youtube_dl/YoutubeDL.py
--- a/youtube_dl/YoutubeDL.py
+++ b/youtube_dl/YoutubeDL.py
@@ -942,8 +942,12 @@ def process_info(self, info_dict):
if self.params.get('forceid', False):
self.to_stdout(info_dict['id'])
if self.params.get('forceurl', False):
- # For RTMP URLs, also include the playpath
- self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
+ if info_dict.get('requested_formats') is not None:
+ for f in info_dict['requested_formats']:
+ self.to_stdout(f['url'] + f.get('play_path', ''))
+ else:
+ # For RTMP URLs, also include the playpath
+ self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
if self.params.get('forcethumbnail', False) and info_dict.get('thumbnail') is not None:
self.to_stdout(info_dict['thumbnail'])
if self.params.get('forcedescription', False) and info_dict.get('description') is not None:
| --format='bestvideo+bestaudio' --get-url crashes
Many videos are available in 1080p only as DASH formats. I often prefer to play videos in mpv instead of saving them to a file, since it supports external audio tracks. I guess youtube-dl should return several URLs, maybe in some special or custom format. Right now it just crashes:
```
$ youtube-dl --format='bestvideo+bestaudio' --get-url https://www.youtube.com/watch?v=MjQG1s3Isgg
Traceback (most recent call last):
File "/usr/bin/youtube-dl", line 6, in <module>
youtube_dl.main()
File "/usr/lib64/python2.7/site-packages/youtube_dl/__init__.py", line 847, in main
_real_main(argv)
File "/usr/lib64/python2.7/site-packages/youtube_dl/__init__.py", line 837, in _real_main
retcode = ydl.download(all_urls)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 1039, in download
self.extract_info(url)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 527, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 564, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 819, in process_video_result
self.process_info(new_info)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 860, in process_info
self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
KeyError: u'url'
```
```
$ youtube-dl --version
2014.05.05
```
| I don't think that this is supported; youtube-dl doesn't mux DASH files. You have to do it yourself: download both files separately, then use ffmpeg or something similar to mux them.
@filoozom it's supported (but undocumented), see #1612. youtube-dl will mux the two formats if you request them.
Also, for this use case I do not need to mux video and audio; I just need the URLs, and mpv will do the rest.
@jaimeMF Oh, didn't see, that's quite handy. Will try that asap.
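With the change above, `youtube-dl -f 'bestvideo+bestaudio' --get-url ...` prints one URL per selected stream instead of crashing. Roughly the same thing through the Python API — the merged selection should end up in `requested_formats` — as a hedged sketch:
```python
import youtube_dl

ydl = youtube_dl.YoutubeDL({'format': 'bestvideo+bestaudio'})
info = ydl.extract_info('https://www.youtube.com/watch?v=MjQG1s3Isgg', download=False)
# For a "video+audio" selection the chosen streams are exposed separately.
for f in info.get('requested_formats') or [info]:
    print(f['url'] + f.get('play_path', ''))
```
The two printed URLs can then be fed straight to a player, e.g. mpv with its `--audio-file` option, with no muxing step.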
| 2014-12-06T21:21:52Z | [] | [] |
Traceback (most recent call last):
File "/usr/bin/youtube-dl", line 6, in <module>
youtube_dl.main()
File "/usr/lib64/python2.7/site-packages/youtube_dl/__init__.py", line 847, in main
_real_main(argv)
File "/usr/lib64/python2.7/site-packages/youtube_dl/__init__.py", line 837, in _real_main
retcode = ydl.download(all_urls)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 1039, in download
self.extract_info(url)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 527, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 564, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 819, in process_video_result
self.process_info(new_info)
File "/usr/lib64/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 860, in process_info
self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
KeyError: u'url'
| 18,934 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-4794 | 3a0d2f520a0f95c2f87b1c95049135b10206f97f | diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py
--- a/youtube_dl/extractor/generic.py
+++ b/youtube_dl/extractor/generic.py
@@ -498,6 +498,19 @@ class GenericIE(InfoExtractor):
'uploader': 'www.abc.net.au',
'title': 'Game of Thrones with dice - Dungeons and Dragons fantasy role-playing game gets new life - 19/01/2015',
}
+ },
+ # embedded viddler video
+ {
+ 'url': 'http://deadspin.com/i-cant-stop-watching-john-wall-chop-the-nuggets-with-th-1681801597',
+ 'info_dict': {
+ 'id': '4d03aad9',
+ 'ext': 'mp4',
+ 'uploader': 'deadspin',
+ 'title': 'WALL-TO-GORTAT',
+ 'timestamp': 1422285291,
+ 'upload_date': '20150126',
+ },
+ 'add_ie': ['Viddler'],
}
]
@@ -860,6 +873,13 @@ def _playlist_from_matches(matches, getter=None, ie=None):
if mobj is not None:
return self.url_result(mobj.group('url'))
+ # Look for embedded Viddler player
+ mobj = (re.search(r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?viddler\.com/embed/.+?)\1', webpage) or
+ re.search(r'<param[^>]+?value=(["\'])(?P<url>(?:https?:)?//(?:www\.)?viddler\.com/player/.+?)\1', webpage))
+
+ if mobj is not None:
+ return self.url_result(mobj.group('url'))
+
# Look for Ooyala videos
mobj = (re.search(r'player.ooyala.com/[^"?]+\?[^"]*?(?:embedCode|ec)=(?P<ec>[^"&]+)', webpage) or
re.search(r'OO.Player.create\([\'"].*?[\'"],\s*[\'"](?P<ec>.{32})[\'"]', webpage))
diff --git a/youtube_dl/extractor/viddler.py b/youtube_dl/extractor/viddler.py
--- a/youtube_dl/extractor/viddler.py
+++ b/youtube_dl/extractor/viddler.py
@@ -5,11 +5,14 @@
float_or_none,
int_or_none,
)
+from ..compat import (
+ compat_urllib_request
+)
class ViddlerIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?viddler\.com/(?:v|embed|player)/(?P<id>[a-z0-9]+)'
- _TEST = {
+ _TESTS = [{
"url": "http://www.viddler.com/v/43903784",
'md5': 'ae43ad7cb59431ce043f0ff7fa13cbf4',
'info_dict': {
@@ -25,7 +28,30 @@ class ViddlerIE(InfoExtractor):
'view_count': int,
'categories': ['video content', 'high quality video', 'video made easy', 'how to produce video with limited resources', 'viddler'],
}
- }
+ }, {
+ "url": "http://www.viddler.com/v/4d03aad9/",
+ "file": "4d03aad9.mp4",
+ "md5": "faa71fbf70c0bee7ab93076fd007f4b0",
+ "info_dict": {
+ 'upload_date': '20150126',
+ 'uploader': 'deadspin',
+ 'id': '4d03aad9',
+ 'timestamp': 1422285291,
+ 'title': 'WALL-TO-GORTAT',
+ }
+ }, {
+ "url": "http://www.viddler.com/player/221ebbbd/0/",
+ "file": "221ebbbd.mp4",
+ "md5": "0defa2bd0ea613d14a6e9bd1db6be326",
+ "info_dict": {
+ 'upload_date': '20140929',
+ 'uploader': 'BCLETeens',
+ 'id': '221ebbbd',
+ 'timestamp': 1411997190,
+ 'title': 'LETeens-Grammar-snack-third-conditional',
+ 'description': ' '
+ }
+ }]
def _real_extract(self, url):
video_id = self._match_id(url)
@@ -33,7 +59,9 @@ def _real_extract(self, url):
json_url = (
'http://api.viddler.com/api/v2/viddler.videos.getPlaybackDetails.json?video_id=%s&key=v0vhrt7bg2xq1vyxhkct' %
video_id)
- data = self._download_json(json_url, video_id)['video']
+ headers = {'Referer': 'http://static.cdn-ec.viddler.com/js/arpeggio/v2/embed.html'}
+ request = compat_urllib_request.Request(json_url, None, headers)
+ data = self._download_json(request, video_id)['video']
formats = []
for filed in data['files']:
@@ -53,7 +81,7 @@ def _real_extract(self, url):
if filed.get('cdn_url'):
f = f.copy()
- f['url'] = self._proto_relative_url(filed['cdn_url'])
+ f['url'] = self._proto_relative_url(filed['cdn_url'], 'http:')
f['format_id'] = filed['profile_id'] + '-cdn'
f['source_preference'] = 1
formats.append(f)
| Viddler sample failed
<pre>youtube-dl -v https://learnenglish.britishcouncil.org/en/word-street/big-meal-scene-2-language-focus
[debug] System config: []
[debug] User config: ['--proxy', 'http://localhost:8118']
[debug] Command-line args: ['-v', 'https://learnenglish.britishcouncil.org/en/word-street/big-meal-scene-2-language-focus']
[debug] Encodings: locale 'UTF-8', fs 'UTF-8', out 'UTF-8', pref: 'UTF-8'
[debug] youtube-dl version 2014.02.26
[debug] Python version 2.7.5 - Linux-3.13.3-201.fc20.i686-i686-with-fedora-20-Heisenbug
[debug] Proxy map: {u'http': 'http://localhost:8118', u'https': 'http://localhost:8118'}
[generic] big-meal-scene-2-language-focus: Requesting header
WARNING: Falling back on generic information extractor.
[generic] big-meal-scene-2-language-focus: Downloading webpage
[generic] big-meal-scene-2-language-focus: Extracting information
ERROR: unable to download video data: HTTP Error 403: Forbidden
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 953, in process_info
success = dl(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 929, in dl
return fd.download(name, info)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 290, in download
return self.real_download(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/http.py", line 52, in real_download
data = compat_urllib_request.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
</pre>
| iPad user-agent string needed where the error happens
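For the record, the change that landed takes a different route than spoofing a device user agent: it requests the playback-details JSON with the Referer of Viddler's own embed page and prefers the `cdn_url` variants it returns. A standalone sketch of that request (video id taken from the patch's deadspin test):
```python
import json
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

video_id = '4d03aad9'  # "WALL-TO-GORTAT", from the test case in the patch
json_url = ('http://api.viddler.com/api/v2/viddler.videos.getPlaybackDetails.json'
            '?video_id=%s&key=v0vhrt7bg2xq1vyxhkct' % video_id)
# The Referer mirrors what Viddler's own embed page sends; the patch attaches
# it to this request before reading the format list.
req = Request(json_url, None, {
    'Referer': 'http://static.cdn-ec.viddler.com/js/arpeggio/v2/embed.html'})
data = json.loads(urlopen(req).read().decode('utf-8'))['video']
print([f.get('cdn_url') for f in data['files']])
```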
| 2015-01-28T05:22:34Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 953, in process_info
success = dl(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 929, in dl
return fd.download(name, info)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 290, in download
return self.real_download(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/http.py", line 52, in real_download
data = compat_urllib_request.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
| 18,940 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-6061 | 7d682f0acb44d72a2c1d0d52fae151f3c273874d | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -209,6 +209,7 @@
from .godtube import GodTubeIE
from .goldenmoustache import GoldenMoustacheIE
from .golem import GolemIE
+from .googledrive import GoogleDriveIE
from .googleplus import GooglePlusIE
from .googlesearch import GoogleSearchIE
from .gorillavid import GorillaVidIE
diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py
--- a/youtube_dl/extractor/generic.py
+++ b/youtube_dl/extractor/generic.py
@@ -48,6 +48,7 @@
from .dailymotion import DailymotionCloudIE
from .onionstudios import OnionStudiosIE
from .snagfilms import SnagFilmsEmbedIE
+from .googledrive import GoogleDriveIE
class GenericIE(InfoExtractor):
@@ -1599,6 +1600,11 @@ def _playlist_from_matches(matches, getter=None, ie=None):
if nbc_sports_url:
return self.url_result(nbc_sports_url, 'NBCSportsVPlayer')
+ # Look for Google Drive embeds
+ google_drive_url = GoogleDriveIE._extract_url(webpage)
+ if google_drive_url:
+ return self.url_result(google_drive_url, 'GoogleDrive')
+
# Look for UDN embeds
mobj = re.search(
r'<iframe[^>]+src="(?P<url>%s)"' % UDNEmbedIE._VALID_URL, webpage)
diff --git a/youtube_dl/extractor/googledrive.py b/youtube_dl/extractor/googledrive.py
new file mode 100644
--- /dev/null
+++ b/youtube_dl/extractor/googledrive.py
@@ -0,0 +1,88 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+ ExtractorError,
+ int_or_none,
+)
+
+
+class GoogleDriveIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:(?:docs|drive)\.google\.com/(?:uc\?.*?id=|file/d/)|video\.google\.com/get_player\?.*?docid=)(?P<id>[a-zA-Z0-9_-]{28})'
+ _TEST = {
+ 'url': 'https://drive.google.com/file/d/0ByeS4oOUV-49Zzh4R1J6R09zazQ/edit?pli=1',
+ 'md5': '881f7700aec4f538571fa1e0eed4a7b6',
+ 'info_dict': {
+ 'id': '0ByeS4oOUV-49Zzh4R1J6R09zazQ',
+ 'ext': 'mp4',
+ 'title': 'Big Buck Bunny.mp4',
+ 'duration': 46,
+ }
+ }
+ _FORMATS_EXT = {
+ '5': 'flv',
+ '6': 'flv',
+ '13': '3gp',
+ '17': '3gp',
+ '18': 'mp4',
+ '22': 'mp4',
+ '34': 'flv',
+ '35': 'flv',
+ '36': '3gp',
+ '37': 'mp4',
+ '38': 'mp4',
+ '43': 'webm',
+ '44': 'webm',
+ '45': 'webm',
+ '46': 'webm',
+ '59': 'mp4',
+ }
+
+ @staticmethod
+ def _extract_url(webpage):
+ mobj = re.search(
+ r'<iframe[^>]+src="https?://(?:video\.google\.com/get_player\?.*?docid=|(?:docs|drive)\.google\.com/file/d/)(?P<id>[a-zA-Z0-9_-]{28})',
+ webpage)
+ if mobj:
+ return 'https://drive.google.com/file/d/%s' % mobj.group('id')
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+ webpage = self._download_webpage(
+ 'http://docs.google.com/file/d/%s' % video_id, video_id, encoding='unicode_escape')
+
+ reason = self._search_regex(r'"reason"\s*,\s*"([^"]+)', webpage, 'reason', default=None)
+ if reason:
+ raise ExtractorError(reason)
+
+ title = self._search_regex(r'"title"\s*,\s*"([^"]+)', webpage, 'title')
+ duration = int_or_none(self._search_regex(
+ r'"length_seconds"\s*,\s*"([^"]+)', webpage, 'length seconds', default=None))
+ fmt_stream_map = self._search_regex(
+ r'"fmt_stream_map"\s*,\s*"([^"]+)', webpage, 'fmt stream map').split(',')
+ fmt_list = self._search_regex(r'"fmt_list"\s*,\s*"([^"]+)', webpage, 'fmt_list').split(',')
+
+ formats = []
+ for fmt, fmt_stream in zip(fmt_list, fmt_stream_map):
+ fmt_id, fmt_url = fmt_stream.split('|')
+ resolution = fmt.split('/')[1]
+ width, height = resolution.split('x')
+ formats.append({
+ 'url': fmt_url,
+ 'format_id': fmt_id,
+ 'resolution': resolution,
+ 'width': int_or_none(width),
+ 'height': int_or_none(height),
+ 'ext': self._FORMATS_EXT[fmt_id],
+ })
+ self._sort_formats(formats)
+
+ return {
+ 'id': video_id,
+ 'title': title,
+ 'thumbnail': self._og_search_thumbnail(webpage),
+ 'duration': duration,
+ 'formats': formats,
+ }
| Google Drive Support Request (Similar to YouTube?)
Hi,
Please would you add support for Google Drive video previews. It looks like it may operate similar to YouTube.
$ youtube-dl --verbose 'https://docs.google.com/file/d/0Bwi6Ha03myF1aGVweVBoemdlU3c/preview?pli=1'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://docs.google.com/file/d/0Bwi6Ha03myF1aGVweVBoemdlU3c/preview?pli=1']
[debug] Encodings: locale 'UTF-8', fs 'utf-8', out 'UTF-8', pref: 'UTF-8'
[debug] youtube-dl version 2014.01.22
[debug] Python version 3.3.3 - Linux-3.12.8-1-ARCH-x86_64-with-arch
[debug] Proxy map: {}
[generic] preview?pli=1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] preview?pli=1: Downloading webpage
[generic] preview?pli=1: Extracting information
ERROR: Unsupported URL: https://docs.google.com/file/d/0Bwi6Ha03myF1aGVweVBoemdlU3c/preview?pli=1; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/lib/python3.3/site-packages/youtube_dl/YoutubeDL.py", line 494, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/common.py", line 156, in extract
return self._real_extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/generic.py", line 353, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: https://docs.google.com/file/d/0Bwi6Ha03myF1aGVweVBoemdlU3c/preview?pli=1; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| I second that; I'd like Google Drive support as well.
Example URL: https://drive.google.com/folderview?id=0B9g_WLKRYa99M3VFQlJrN0dXa3M&usp=sharing
I would also greatly benefit from this. youtube-dl supports Dropbox, and it's my favorite way of actually being able to download Dropbox files.
Users often decide to send me test files or other things related to my development via Google Drive, and it's always a big hassle to guide them through the process of re-uploading them somewhere that actually works.
It would be really neat if youtube-dl could support Google Drive links natively so this would no longer be a problem.
That said, I think this may be slightly out of scope of youtube-dl, at least in terms of its typical intent as being used for video sharing platforms. Maybe youtube-dl could be generalized, or maybe this would be better off as a separate project?
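For what the extractor added above actually does with the page: the docs.google.com view embeds a YouTube-style `fmt_stream_map` ("itag|url" pairs) alongside `fmt_list` ("itag/WIDTHxHEIGHT/..." entries), and the two are simply zipped into formats. A small self-contained illustration of that parsing step (the sample values are invented):
```python
# Invented sample values; the real strings are scraped from the
# "fmt_stream_map" and "fmt_list" fields on the docs.google.com page.
fmt_stream_map = ('18|https://redirector.example/videoplayback?a,'
                  '22|https://redirector.example/videoplayback?b').split(',')
fmt_list = '18/640x360/9/0/115,22/1280x720/9/0/115'.split(',')

formats = []
for fmt, fmt_stream in zip(fmt_list, fmt_stream_map):
    fmt_id, fmt_url = fmt_stream.split('|')
    width, height = fmt.split('/')[1].split('x')
    formats.append({'format_id': fmt_id, 'url': fmt_url,
                    'width': int(width), 'height': int(height)})
print(formats)
```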
| 2015-06-24T00:31:13Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.3/site-packages/youtube_dl/YoutubeDL.py", line 494, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/common.py", line 156, in extract
return self._real_extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/generic.py", line 353, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: https://docs.google.com/file/d/0Bwi6Ha03myF1aGVweVBoemdlU3c/preview?pli=1; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| 18,958 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-6196 | 41c0d2f8cb22fe34d957bc9b5f9032a9160685ff | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -460,6 +460,7 @@
from .radiofrance import RadioFranceIE
from .rai import RaiIE
from .rbmaradio import RBMARadioIE
+from .rdsca import RDScaIE
from .redtube import RedTubeIE
from .restudy import RestudyIE
from .reverbnation import ReverbNationIE
diff --git a/youtube_dl/extractor/rdsca.py b/youtube_dl/extractor/rdsca.py
new file mode 100644
--- /dev/null
+++ b/youtube_dl/extractor/rdsca.py
@@ -0,0 +1,50 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+ parse_iso8601,
+ url_basename,
+)
+
+
+class RDScaIE(InfoExtractor):
+ IE_NAME = 'RDS.ca'
+ _VALID_URL = r'http://(?:www\.)?rds\.ca/videos/(?P<id>.*)'
+
+ _TESTS = [{
+ 'url': 'http://www.rds.ca/videos/football/nfl/fowler-jr-prend-la-direction-de-jacksonville-3.1132799',
+ 'info_dict': {
+ "ext": "mp4",
+ "title": "Fowler Jr. prend la direction de Jacksonville",
+ "description": "Dante Fowler Jr. est le troisième choix du repêchage 2015 de la NFL. ",
+ "timestamp": 1430397346,
+ }
+ }]
+
+ def _real_extract(self, url):
+ video_id = url_basename(url)
+
+ webpage = self._download_webpage(url, video_id)
+
+ title = self._search_regex(
+ r'<span itemprop="name"[^>]*>([^\n]*)</span>', webpage, 'video title', default=None)
+ video_url = self._search_regex(
+ r'<span itemprop="contentURL" content="([^"]+)"', webpage, 'video URL')
+ upload_date = parse_iso8601(self._search_regex(
+ r'<span itemprop="uploadDate" content="([^"]+)"', webpage, 'upload date', default=None))
+ description = self._search_regex(
+ r'<span itemprop="description"[^>]*>([^\n]*)</span>', webpage, 'description', default=None)
+ thumbnail = self._search_regex(
+ r'<span itemprop="thumbnailUrl" content="([^"]+)"', webpage, 'upload date', default=None)
+
+ return {
+ 'id': video_id,
+ 'title': title,
+ 'description': description,
+ 'thumbnail': thumbnail,
+ 'timestamp': upload_date,
+ 'formats': [{
+ 'url': video_url,
+ }],
+ }
| Add support for rds.ca
Hi,
Could you add support for http://www.rds.ca ?
Thanks.
Example url:
$ youtube-dl --verbose http://www.rds.ca/vid%C3%A9os/un-voyage-positif-3.877934
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'http://www.rds.ca/vid%C3%A9os/un-voyage-positif-3.877934']
[debug] youtube-dl version 2013.10.17
[debug] Python version 3.3.2 - Linux-3.11.5-1-ARCH-x86_64-with-arch
[debug] Proxy map: {}
WARNING: Falling back on generic information extractor.
[generic] un-voyage-positif-3.877934: Downloading webpage
[generic] un-voyage-positif-3.877934: Extracting information
ERROR: Unsupported URL: http://www.rds.ca/vid%C3%A9os/un-voyage-positif-3.877934; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/lib/python3.3/site-packages/youtube_dl/YoutubeDL.py", line 353, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/common.py", line 117, in extract
return self._real_extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/generic.py", line 173, in _real_extract
raise ExtractorError(u'Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: http://www.rds.ca/vid%C3%A9os/un-voyage-positif-3.877934; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| 2015-07-11T17:08:24Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.3/site-packages/youtube_dl/YoutubeDL.py", line 353, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/common.py", line 117, in extract
return self._real_extract(url)
File "/usr/lib/python3.3/site-packages/youtube_dl/extractor/generic.py", line 173, in _real_extract
raise ExtractorError(u'Unsupported URL: %s' % url)
youtube_dl.utils.ExtractorError: Unsupported URL: http://www.rds.ca/vid%C3%A9os/un-voyage-positif-3.877934; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| 18,961 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-6731 | 080997b8083fed357061e301ab7f38484b24ded2 | diff --git a/youtube_dl/utils.py b/youtube_dl/utils.py
--- a/youtube_dl/utils.py
+++ b/youtube_dl/utils.py
@@ -587,6 +587,11 @@ def __init__(self, downloaded, expected):
def _create_http_connection(ydl_handler, http_class, is_https, *args, **kwargs):
+ # Working around python 2 bug (see http://bugs.python.org/issue17849) by limiting
+ # expected HTTP responses to meet HTTP/1.0 or later (see also
+ # https://github.com/rg3/youtube-dl/issues/6727)
+ if sys.version_info < (3, 0):
+ kwargs['strict'] = True
hc = http_class(*args, **kwargs)
source_address = ydl_handler._params.get('source_address')
if source_address is not None:
| ERROR: readline() takes exactly 1 argument (2 given)
I get this error message ONLY WHEN http_proxy variable is set.
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'-ciw', u'-f', u'bestaudio', u'--write-all-thumbnails', u'--no-mtime', u'--download-archive', u'.archive.txt', u'--playlist-start', u'1', u'https://www.youtube.com/watch?v=tVAI3voBG1E&list=RDtVAI3voBG1E']
[debug] Encodings: locale UTF-8, fs UTF-8, out None, pref UTF-8
[debug] youtube-dl version 2015.08.28
[debug] Python version 2.7.10 - CYGWIN_NT-6.1-WOW-2.2.0-0.289-5-3-i686-32bit
[debug] exe versions: ffmpeg N-72346-ga838b22, ffprobe N-72346-ga838b22
[debug] Proxy map: {'http': '78.188.251.242:8080', u'https': '78.188.251.242:8080'}
[youtube:playlist] Downloading playlist RDtVAI3voBG1E - add --no-playlist to just download video tVAI3voBG1E
[youtube:playlist] RDtVAI3voBG1E: Downloading Youtube mix
ERROR: readline() takes exactly 1 argument (2 given)
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 655, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 287, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1572, in _real_extract
return self._extract_mix(playlist_id)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1485, in _extract_mix
url, playlist_id, 'Downloading Youtube mix')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 438, in _download_webpage
res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 345, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 326, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1860, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(_args)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 749, in https_open
req, *_kwargs)
File "/usr/lib/python2.7/urllib2.py", line 1194, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "/usr/lib/python2.7/httplib.py", line 1053, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1093, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1049, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 893, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 855, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 1266, in connect
HTTPConnection.connect(self)
File "/usr/lib/python2.7/httplib.py", line 835, in connect
self._tunnel()
File "/usr/lib/python2.7/httplib.py", line 819, in _tunnel
line = response.fp.readline(_MAXLINE + 1)
TypeError: readline() takes exactly 1 argument (2 given)
| Python 2.7.10
It works fine for me; that line of code executes without problems. Could you try [with an official CPython build](https://www.python.org/downloads/release/python-2710/)?
I tried with the latest Python, python-2.7.10.msi (md5sum 4ba2c79b103f6003bc4611c837a08208).
Started from CMD with the Windows Python.
Same error:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-ciw', u'-f', u'bestaudio', u'--write-all-thumbnails', u'--no-mtime', u'--download-archive', u'.archive.txt', u'--playlist-start', u'1', u'-a', u'alink.txt', u'--verbose']
[debug] Batch file urls: [u'https://www.youtube.com/watch?v=tVAI3voBG1E&list=RDtVAI3voBG1E']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2015.08.28
[debug] Python version 2.7.10 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-72346-ga838b22, ffprobe N-72346-ga838b22
[debug] Proxy map: {'http': '223.19.230.181:80', u'https': '223.19.230.181:80'}
[youtube:playlist] Downloading playlist RDtVAI3voBG1E - add --no-playlist to just download video tVAI3voBG1E
[youtube:playlist] RDtVAI3voBG1E: Downloading Youtube mix
ERROR: readline() takes exactly 1 argument (2 given)
Traceback (most recent call last):
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\YoutubeDL.py", line 655, in extract_info
ie_result = ie.extract(url)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\common.py", line 287, in extract
return self._real_extract(url)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\youtube.py", line 1572, in _real_extract
return self._extract_mix(playlist_id)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\youtube.py", line 1485, in _extract_mix
url, playlist_id, 'Downloading Youtube mix')
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\common.py", line 438, in _download_webpage
res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\common.py", line 345, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\extractor\common.py", line 326, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\YoutubeDL.py", line 1860, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "C:\Python27\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "C:\Python27\lib\urllib2.py", line 449, in _open
'_open', req)
File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(_args)
File "c:\cygwin\usr\local\bin\youtube-dl\youtube_dl\utils.py", line 749, in https_open
req, *_kwargs)
File "C:\Python27\lib\urllib2.py", line 1194, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "C:\Python27\lib\httplib.py", line 1053, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1093, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1049, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 893, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 855, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 1266, in connect
HTTPConnection.connect(self)
File "C:\Python27\lib\httplib.py", line 835, in connect
self._tunnel()
File "C:\Python27\lib\httplib.py", line 819, in _tunnel
line = response.fp.readline(_MAXLINE + 1)
TypeError: readline() takes exactly 1 argument (2 given)
I can reproduce this as well. It's a [python issue](http://bugs.python.org/issue17849).
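The patch that closed this works around the CPython bug on the youtube-dl side when it builds HTTP connections; a trimmed sketch of the idea (the real helper also handles `source_address` and the HTTPS handler):
```python
import sys

def _create_http_connection(http_class, *args, **kwargs):
    # On Python 2, strict=True makes httplib raise BadStatusLine for replies it
    # cannot parse as HTTP/1.x instead of falling back to its HTTP/0.9 wrapper,
    # whose readline() does not accept the size argument that _tunnel() passes
    # (http://bugs.python.org/issue17849). Python 3 no longer takes the parameter.
    if sys.version_info < (3, 0):
        kwargs['strict'] = True
    return http_class(*args, **kwargs)
```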
| 2015-09-01T20:17:54Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 655, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 287, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1572, in _real_extract
return self._extract_mix(playlist_id)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1485, in _extract_mix
url, playlist_id, 'Downloading Youtube mix')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 438, in _download_webpage
res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 345, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 326, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1860, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(_args)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 749, in https_open
req, *_kwargs)
File "/usr/lib/python2.7/urllib2.py", line 1194, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "/usr/lib/python2.7/httplib.py", line 1053, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1093, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1049, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 893, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 855, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 1266, in connect
HTTPConnection.connect(self)
File "/usr/lib/python2.7/httplib.py", line 835, in connect
self._tunnel()
File "/usr/lib/python2.7/httplib.py", line 819, in _tunnel
line = response.fp.readline(_MAXLINE + 1)
TypeError: readline() takes exactly 1 argument (2 given)
| 18,965 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-7045 | 4c24ed94640b148882f1ceb400127b3b3afcafd4 | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -231,7 +231,11 @@
from .huffpost import HuffPostIE
from .hypem import HypemIE
from .iconosquare import IconosquareIE
-from .ign import IGNIE, OneUPIE
+from .ign import (
+ IGNIE,
+ OneUPIE,
+ PCMagIE,
+)
from .imdb import (
ImdbIE,
ImdbListIE
diff --git a/youtube_dl/extractor/ign.py b/youtube_dl/extractor/ign.py
--- a/youtube_dl/extractor/ign.py
+++ b/youtube_dl/extractor/ign.py
@@ -3,6 +3,10 @@
import re
from .common import InfoExtractor
+from ..utils import (
+ int_or_none,
+ parse_iso8601,
+)
class IGNIE(InfoExtractor):
@@ -11,25 +15,24 @@ class IGNIE(InfoExtractor):
Some videos of it.ign.com are also supported
"""
- _VALID_URL = r'https?://.+?\.ign\.com/(?P<type>videos|show_videos|articles|(?:[^/]*/feature))(/.+)?/(?P<name_or_id>.+)'
+ _VALID_URL = r'https?://.+?\.ign\.com/(?:[^/]+/)?(?P<type>videos|show_videos|articles|feature|(?:[^/]+/\d+/video))(/.+)?/(?P<name_or_id>.+)'
IE_NAME = 'ign.com'
- _CONFIG_URL_TEMPLATE = 'http://www.ign.com/videos/configs/id/%s.config'
- _DESCRIPTION_RE = [
- r'<span class="page-object-description">(.+?)</span>',
- r'id="my_show_video">.*?<p>(.*?)</p>',
- r'<meta name="description" content="(.*?)"',
- ]
+ _API_URL_TEMPLATE = 'http://apis.ign.com/video/v3/videos/%s'
+ _EMBED_RE = r'<iframe[^>]+?["\']((?:https?:)?//.+?\.ign\.com.+?/embed.+?)["\']'
_TESTS = [
{
'url': 'http://www.ign.com/videos/2013/06/05/the-last-of-us-review',
- 'md5': 'eac8bdc1890980122c3b66f14bdd02e9',
+ 'md5': 'febda82c4bafecd2d44b6e1a18a595f8',
'info_dict': {
'id': '8f862beef863986b2785559b9e1aa599',
'ext': 'mp4',
'title': 'The Last of Us Review',
'description': 'md5:c8946d4260a4d43a00d5ae8ed998870c',
+ 'timestamp': 1370440800,
+ 'upload_date': '20130605',
+ 'uploader_id': 'cberidon@ign.com',
}
},
{
@@ -44,6 +47,9 @@ class IGNIE(InfoExtractor):
'ext': 'mp4',
'title': 'GTA 5 Video Review',
'description': 'Rockstar drops the mic on this generation of games. Watch our review of the masterly Grand Theft Auto V.',
+ 'timestamp': 1379339880,
+ 'upload_date': '20130916',
+ 'uploader_id': 'danieljkrupa@gmail.com',
},
},
{
@@ -52,6 +58,9 @@ class IGNIE(InfoExtractor):
'ext': 'mp4',
'title': '26 Twisted Moments from GTA 5 in Slow Motion',
'description': 'The twisted beauty of GTA 5 in stunning slow motion.',
+ 'timestamp': 1386878820,
+ 'upload_date': '20131212',
+ 'uploader_id': 'togilvie@ign.com',
},
},
],
@@ -66,12 +75,20 @@ class IGNIE(InfoExtractor):
'id': '078fdd005f6d3c02f63d795faa1b984f',
'ext': 'mp4',
'title': 'Rewind Theater - Wild Trailer Gamescom 2014',
- 'description': (
- 'Giant skeletons, bloody hunts, and captivating'
- ' natural beauty take our breath away.'
- ),
+ 'description': 'Brian and Jared explore Michel Ancel\'s captivating new preview.',
+ 'timestamp': 1408047180,
+ 'upload_date': '20140814',
+ 'uploader_id': 'jamesduggan1990@gmail.com',
},
},
+ {
+ 'url': 'http://me.ign.com/en/videos/112203/video/how-hitman-aims-to-be-different-than-every-other-s',
+ 'only_matching': True,
+ },
+ {
+ 'url': 'http://me.ign.com/ar/angry-birds-2/106533/video/lrd-ldyy-lwl-lfylm-angry-birds',
+ 'only_matching': True,
+ },
]
def _find_video_id(self, webpage):
@@ -82,7 +99,7 @@ def _find_video_id(self, webpage):
r'<object id="vid_(.+?)"',
r'<meta name="og:image" content=".*/(.+?)-(.+?)/.+.jpg"',
]
- return self._search_regex(res_id, webpage, 'video id')
+ return self._search_regex(res_id, webpage, 'video id', default=None)
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
@@ -91,8 +108,8 @@ def _real_extract(self, url):
webpage = self._download_webpage(url, name_or_id)
if page_type != 'video':
multiple_urls = re.findall(
- '<param name="flashvars"[^>]*value="[^"]*?url=(https?://www\.ign\.com/videos/.*?)["&]',
- webpage)
+ r'<param name="flashvars"[^>]*value="[^"]*?url=(https?://www\.ign\.com/videos/.*?)["&]',
+ webpage)
if multiple_urls:
entries = [self.url_result(u, ie='IGN') for u in multiple_urls]
return {
@@ -102,22 +119,50 @@ def _real_extract(self, url):
}
video_id = self._find_video_id(webpage)
- result = self._get_video_info(video_id)
- description = self._html_search_regex(self._DESCRIPTION_RE,
- webpage, 'video description', flags=re.DOTALL)
- result['description'] = description
- return result
+ if not video_id:
+ return self.url_result(self._search_regex(self._EMBED_RE, webpage, 'embed url'))
+ return self._get_video_info(video_id)
def _get_video_info(self, video_id):
- config_url = self._CONFIG_URL_TEMPLATE % video_id
- config = self._download_json(config_url, video_id)
- media = config['playlist']['media']
+ api_data = self._download_json(self._API_URL_TEMPLATE % video_id, video_id)
+
+ formats = []
+ m3u8_url = api_data['refs'].get('m3uUrl')
+ if m3u8_url:
+ m3u8_formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False)
+ if m3u8_formats:
+ formats.extend(m3u8_formats)
+ f4m_url = api_data['refs'].get('f4mUrl')
+ if f4m_url:
+ f4m_formats = self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False)
+ if f4m_formats:
+ formats.extend(f4m_formats)
+ for asset in api_data['assets']:
+ formats.append({
+ 'url': asset['url'],
+ 'tbr': asset.get('actual_bitrate_kbps'),
+ 'fps': asset.get('frame_rate'),
+ 'height': int_or_none(asset.get('height')),
+ 'width': int_or_none(asset.get('width')),
+ })
+ self._sort_formats(formats)
+
+ thumbnails = [{
+ 'url': thumbnail['url']
+ } for thumbnail in api_data.get('thumbnails', [])]
+
+ metadata = api_data['metadata']
return {
- 'id': media['metadata']['videoId'],
- 'url': media['url'],
- 'title': media['metadata']['title'],
- 'thumbnail': media['poster'][0]['url'].replace('{size}', 'grande'),
+ 'id': api_data.get('videoId') or video_id,
+ 'title': metadata.get('longTitle') or metadata.get('name') or metadata.get('title'),
+ 'description': metadata.get('description'),
+ 'timestamp': parse_iso8601(metadata.get('publishDate')),
+ 'duration': int_or_none(metadata.get('duration')),
+ 'display_id': metadata.get('slug') or video_id,
+ 'uploader_id': metadata.get('creator'),
+ 'thumbnails': thumbnails,
+ 'formats': formats,
}
@@ -125,16 +170,17 @@ class OneUPIE(IGNIE):
_VALID_URL = r'https?://gamevideos\.1up\.com/(?P<type>video)/id/(?P<name_or_id>.+)\.html'
IE_NAME = '1up.com'
- _DESCRIPTION_RE = r'<div id="vid_summary">(.+?)</div>'
-
_TESTS = [{
'url': 'http://gamevideos.1up.com/video/id/34976.html',
- 'md5': '68a54ce4ebc772e4b71e3123d413163d',
+ 'md5': 'c9cc69e07acb675c31a16719f909e347',
'info_dict': {
'id': '34976',
'ext': 'mp4',
'title': 'Sniper Elite V2 - Trailer',
- 'description': 'md5:5d289b722f5a6d940ca3136e9dae89cf',
+ 'description': 'md5:bf0516c5ee32a3217aa703e9b1bc7826',
+ 'timestamp': 1313099220,
+ 'upload_date': '20110811',
+ 'uploader_id': 'IGN',
}
}]
@@ -143,3 +189,36 @@ def _real_extract(self, url):
result = super(OneUPIE, self)._real_extract(url)
result['id'] = mobj.group('name_or_id')
return result
+
+
+class PCMagIE(IGNIE):
+ _VALID_URL = r'https?://(?:www\.)?pcmag\.com/(?P<type>videos|article2)(/.+)?/(?P<name_or_id>.+)'
+ IE_NAME = 'pcmag'
+
+ _EMBED_RE = r'iframe.setAttribute\("src",\s*__util.objToUrlString\("http://widgets\.ign\.com/video/embed/content.html?[^"]*url=([^"]+)["&]'
+
+ _TESTS = [{
+ 'url': 'http://www.pcmag.com/videos/2015/01/06/010615-whats-new-now-is-gogo-snooping-on-your-data',
+ 'md5': '212d6154fd0361a2781075f1febbe9ad',
+ 'info_dict': {
+ 'id': 'ee10d774b508c9b8ec07e763b9125b91',
+ 'ext': 'mp4',
+ 'title': '010615_What\'s New Now: Is GoGo Snooping on Your Data?',
+ 'description': 'md5:a7071ae64d2f68cc821c729d4ded6bb3',
+ 'timestamp': 1420571160,
+ 'upload_date': '20150106',
+ 'uploader_id': 'cozzipix@gmail.com',
+ }
+ },{
+ 'url': 'http://www.pcmag.com/article2/0,2817,2470156,00.asp',
+ 'md5': '94130c1ca07ba0adb6088350681f16c1',
+ 'info_dict': {
+ 'id': '042e560ba94823d43afcb12ddf7142ca',
+ 'ext': 'mp4',
+ 'title': 'HTC\'s Weird New Re Camera - What\'s New Now',
+ 'description': 'md5:53433c45df96d2ea5d0fda18be2ca908',
+ 'timestamp': 1412953920,
+ 'upload_date': '20141010',
+ 'uploader_id': 'chris_snyder@pcmag.com',
+ }
+ }]
| IGN - Downloading video from incorrect cdn
I am trying to extract a video from this [LINK](http://www.pcmag.com/videos/2015/09/22/092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens). I am in North America, and youtube-dl in this case seems to be downloading from this CDN, https://a248.e.akamai.net/assets2.ign.com/videos/zencoder/2015/9/21/640/fb03ab22da0f33c53e711a46433ccd17-500000-1442868763.mp4, which returns a 400 error. After visiting the original [LINK](http://www.pcmag.com/videos/2015/09/22/092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens) and checking the network calls, the mp4 file is found here
http://assets2.ign.com/videos/zencoder/2015/9/21/640/fb03ab22da0f33c53e711a46433ccd17-500000-1442868763.mp4
It seems that the difference is in the beginning of the URL: `[0]https://a248.e.akamai.net/assets2.ign.com` vs `[1]http://assets2.ign.com`
```
youtube-dl -v --no-warnings -o "/media/%(title)s.%(ext)s" "http://www.pcmag.com/videos/ 2015/09/22/092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens"
[debug] System config: [u'--ffmpeg-location', u'/home/mimi/bin/ffmpeg']
[debug] User config: []
[debug] Command-line args: [u'-v', u'--no-warnings', u'-o', u'/media/%(title)s.%(ext)s', u'http://www.p cmag.com/videos/2015/09/22/092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.09.22
[debug] Python version 2.7.9 - Linux-3.19.0-28-generic-x86_64-with-Ubuntu-15.04-vivid
[debug] exe versions: ffmpeg 2.6.git, ffprobe 2.6.git, rtmpdump 2.4
[debug] Proxy map: {}
[generic] 092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens: Requesting header
[generic] 092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens: Downloading webpage
[generic] 092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens: Extracting information
[debug] Invoking downloader on u'https://a248.e.akamai.net/assets2.ign.com/videos/zencoder/2015/9/21/640/fb03ab22da0f33c53e711a46433ccd17-500000-1442868763.mp4'
ERROR: unable to download video data: HTTP Error 400: Bad Request
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1590, in process_info
success = dl(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1532, in dl
return fd.download(name, info)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 342, in download
return self.real_download(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/http.py", line 60, in real_download
data = self.ydl.urlopen(request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1865, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
```
| The generic extractor extracts the link from the `twitter:player:stream` meta tag:
```
<meta property="twitter:player:stream" content="https://a248.e.akamai.net/assets2.ign.com/videos/zencoder/2015/9/21/640/fb03ab22da0f33c53e711a46433ccd17-500000-1442868763.mp4" />
```
In my case the IGN link (http://www.ign.com/videos/2015/09/22/092115-microsoft-office-chief-spitballs-about-3d-office-on-the-hololens) works, although I can't watch it in the browser.
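The fix above routes PCMag pages through the IGN extractor, which asks IGN's video API for the asset list instead of scraping the akamai URL from that meta tag. A minimal sketch of the API call the new code makes (video id taken from the extractor's own test case):
```python
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

video_id = '8f862beef863986b2785559b9e1aa599'  # "The Last of Us Review", from the tests above
api_data = json.loads(urlopen(
    'http://apis.ign.com/video/v3/videos/%s' % video_id).read().decode('utf-8'))

print(api_data['metadata'].get('longTitle'))
for asset in api_data['assets']:
    print(asset.get('height'), asset['url'])
```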
| 2015-10-02T21:04:14Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1590, in process_info
success = dl(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1532, in dl
return fd.download(name, info)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 342, in download
return self.real_download(filename, info_dict)
File "/usr/local/bin/youtube-dl/youtube_dl/downloader/http.py", line 60, in real_download
data = self.ydl.urlopen(request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1865, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
| 18,968 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-7599 | 1b38185361e096d6e34db11adac7333ac9dadca0 | diff --git a/youtube_dl/extractor/youtube.py b/youtube_dl/extractor/youtube.py
--- a/youtube_dl/extractor/youtube.py
+++ b/youtube_dl/extractor/youtube.py
@@ -674,7 +674,23 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
{
'url': 'http://vid.plus/FlRa-iH7PGw',
'only_matching': True,
- }
+ },
+ {
+ # Title with JS-like syntax "};"
+ 'url': 'https://www.youtube.com/watch?v=lsguqyKfVQg',
+ 'info_dict': {
+ 'id': 'lsguqyKfVQg',
+ 'ext': 'mp4',
+ 'title': '{dark walk}; Loki/AC/Dishonored; collab w/Elflover21',
+ 'description': 'md5:8085699c11dc3f597ce0410b0dcbb34a',
+ 'upload_date': '20151119',
+ 'uploader_id': 'IronSoulElf',
+ 'uploader': 'IronSoulElf',
+ },
+ 'params': {
+ 'skip_download': True,
+ },
+ },
]
def __init__(self, *args, **kwargs):
@@ -858,16 +874,24 @@ def _get_subtitles(self, video_id, webpage):
return {}
return sub_lang_list
+ def _get_ytplayer_config(self, webpage):
+ patterns = [
+ r';ytplayer\.config\s*=\s*({.*?});ytplayer',
+ r';ytplayer\.config\s*=\s*({.*?});',
+ ]
+ config = self._search_regex(patterns, webpage, 'ytconfig.player', default=None)
+ if config is not None:
+ return json.loads(uppercase_escape(config))
+
def _get_automatic_captions(self, video_id, webpage):
"""We need the webpage for getting the captions url, pass it as an
argument to speed up the process."""
self.to_screen('%s: Looking for automatic captions' % video_id)
- mobj = re.search(r';ytplayer.config = ({.*?});', webpage)
+ player_config = self._get_ytplayer_config(webpage)
err_msg = 'Couldn\'t find automatic captions for %s' % video_id
- if mobj is None:
+ if player_config is None:
self._downloader.report_warning(err_msg)
return {}
- player_config = json.loads(mobj.group(1))
try:
args = player_config['args']
caption_url = args['ttsurl']
@@ -1074,10 +1098,8 @@ def add_dash_mpd(video_info):
age_gate = False
video_info = None
# Try looking directly into the video webpage
- mobj = re.search(r';ytplayer\.config\s*=\s*({.*?});', video_webpage)
- if mobj:
- json_code = uppercase_escape(mobj.group(1))
- ytplayer_config = json.loads(json_code)
+ ytplayer_config = self._get_ytplayer_config(video_webpage)
+ if ytplayer_config is not None:
args = ytplayer_config['args']
if args.get('url_encoded_fmt_stream_map'):
# Convert to the same format returned by compat_parse_qs
| ValueError when downloading a video from YouTube
Downloading [this video](https://www.youtube.com/watch?v=Ms7iBXnlUO8) seems to fail:
```
$ PYTHONPATH=`pwd` ./bin/youtube-dl --verbose --list-formats 'https://www.youtube.com/watch?v=Ms7iBXnlUO8'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'--list-formats', u'https://www.youtube.com/watch?v=Ms7iBXnlUO8']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.11.10
[debug] Git HEAD: fcd817a
[debug] Python version 2.7.9 - Linux-3.19.0-33-generic-x86_64-with-Ubuntu-15.04-vivid
[debug] exe versions: ffmpeg 2.5.8-0ubuntu0.15.04.1, ffprobe 2.5.8-0ubuntu0.15.04.1, rtmpdump 2.4
[debug] Proxy map: {}
[youtube] Ms7iBXnlUO8: Downloading webpage
Traceback (most recent call last):
File "./bin/youtube-dl", line 6, in <module>
youtube_dl.main()
File "/home/lukas/work/youtube-dl/youtube_dl/__init__.py", line 410, in main
_real_main(argv)
File "/home/lukas/work/youtube-dl/youtube_dl/__init__.py", line 400, in _real_main
retcode = ydl.download(all_urls)
File "/home/lukas/work/youtube-dl/youtube_dl/YoutubeDL.py", line 1666, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/home/lukas/work/youtube-dl/youtube_dl/YoutubeDL.py", line 661, in extract_info
ie_result = ie.extract(url)
File "/home/lukas/work/youtube-dl/youtube_dl/extractor/common.py", line 290, in extract
return self._real_extract(url)
File "/home/lukas/work/youtube-dl/youtube_dl/extractor/youtube.py", line 1080, in _real_extract
ytplayer_config = json.loads(json_code)
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 6498 (char 6497)
```
| The problem is that the video has keywords set to `"info: {}});}, n);}};(function(w, startTick"` (really) and the current regex cuts it in the middle. One possible fix would be to change the regex:
``` diff
diff --git a/youtube_dl/extractor/youtube.py b/youtube_dl/extractor/youtube.py
index 687e0b4..2173118 100644
--- a/youtube_dl/extractor/youtube.py
+++ b/youtube_dl/extractor/youtube.py
@@ -1074,7 +1074,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
age_gate = False
video_info = None
# Try looking directly into the video webpage
- mobj = re.search(r';ytplayer\.config\s*=\s*({.*?});', video_webpage)
+ mobj = re.search(r';ytplayer\.config\s*=\s*({.*?});ytplayer', video_webpage)
if mobj:
json_code = uppercase_escape(mobj.group(1))
ytplayer_config = json.loads(json_code)
```
Another video affected by this - https://www.youtube.com/watch?v=lsguqyKfVQg
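The merged patch above generalizes this idea into a `_get_ytplayer_config` helper that tries the stricter pattern first and falls back to the old one. A standalone sketch of the same logic, using a title like the one from the second affected video (plain `re`/`json`, outside the extractor framework):
```
import json
import re


def get_ytplayer_config(webpage):
    # Prefer the pattern anchored on ";ytplayer"; fall back to the lazier one.
    patterns = [
        r';ytplayer\.config\s*=\s*({.*?});ytplayer',
        r';ytplayer\.config\s*=\s*({.*?});',
    ]
    for pattern in patterns:
        mobj = re.search(pattern, webpage)
        if mobj:
            return json.loads(mobj.group(1))
    return None


# A title containing "};" would cut the old, lazier pattern short, but the
# stricter pattern skips past it to the real end of the config object.
page = ';ytplayer.config = {"args": {"title": "{dark walk};"}};ytplayer.load();'
print(get_ytplayer_config(page)['args']['title'])
```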
| 2015-11-22T12:15:02Z | [] | [] |
Traceback (most recent call last):
File "./bin/youtube-dl", line 6, in <module>
youtube_dl.main()
File "/home/lukas/work/youtube-dl/youtube_dl/__init__.py", line 410, in main
_real_main(argv)
File "/home/lukas/work/youtube-dl/youtube_dl/__init__.py", line 400, in _real_main
retcode = ydl.download(all_urls)
File "/home/lukas/work/youtube-dl/youtube_dl/YoutubeDL.py", line 1666, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/home/lukas/work/youtube-dl/youtube_dl/YoutubeDL.py", line 661, in extract_info
ie_result = ie.extract(url)
File "/home/lukas/work/youtube-dl/youtube_dl/extractor/common.py", line 290, in extract
return self._real_extract(url)
File "/home/lukas/work/youtube-dl/youtube_dl/extractor/youtube.py", line 1080, in _real_extract
ytplayer_config = json.loads(json_code)
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 6498 (char 6497)
| 18,979 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-764 | fbbdf475b1a534389585d696db5e6c8b3fd212fb | diff --git a/youtube_dl/FileDownloader.py b/youtube_dl/FileDownloader.py
--- a/youtube_dl/FileDownloader.py
+++ b/youtube_dl/FileDownloader.py
@@ -485,14 +485,17 @@ def process_info(self, info_dict):
subtitle = info_dict['subtitles'][0]
(sub_error, sub_lang, sub) = subtitle
sub_format = self.params.get('subtitlesformat')
- try:
- sub_filename = filename.rsplit('.', 1)[0] + u'.' + sub_lang + u'.' + sub_format
- self.report_writesubtitles(sub_filename)
- with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
- subfile.write(sub)
- except (OSError, IOError):
- self.report_error(u'Cannot write subtitles file ' + descfn)
- return
+ if sub_error:
+ self.report_warning("Some error while getting the subtitles")
+ else:
+ try:
+ sub_filename = filename.rsplit('.', 1)[0] + u'.' + sub_lang + u'.' + sub_format
+ self.report_writesubtitles(sub_filename)
+ with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
+ subfile.write(sub)
+ except (OSError, IOError):
+ self.report_error(u'Cannot write subtitles file ' + descfn)
+ return
if self.params.get('onlysubtitles', False):
return
@@ -501,14 +504,17 @@ def process_info(self, info_dict):
sub_format = self.params.get('subtitlesformat')
for subtitle in subtitles:
(sub_error, sub_lang, sub) = subtitle
- try:
- sub_filename = filename.rsplit('.', 1)[0] + u'.' + sub_lang + u'.' + sub_format
- self.report_writesubtitles(sub_filename)
- with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
- subfile.write(sub)
- except (OSError, IOError):
- self.trouble(u'ERROR: Cannot write subtitles file ' + descfn)
- return
+ if sub_error:
+ self.report_warning("Some error while getting the subtitles")
+ else:
+ try:
+ sub_filename = filename.rsplit('.', 1)[0] + u'.' + sub_lang + u'.' + sub_format
+ self.report_writesubtitles(sub_filename)
+ with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
+ subfile.write(sub)
+ except (OSError, IOError):
+ self.trouble(u'ERROR: Cannot write subtitles file ' + descfn)
+ return
if self.params.get('onlysubtitles', False):
return
diff --git a/youtube_dl/InfoExtractors.py b/youtube_dl/InfoExtractors.py
--- a/youtube_dl/InfoExtractors.py
+++ b/youtube_dl/InfoExtractors.py
@@ -253,11 +253,11 @@ def _get_available_subtitles(self, video_id):
try:
sub_list = compat_urllib_request.urlopen(request).read().decode('utf-8')
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
- return (u'WARNING: unable to download video subtitles: %s' % compat_str(err), None)
+ return (u'unable to download video subtitles: %s' % compat_str(err), None)
sub_lang_list = re.findall(r'name="([^"]*)"[^>]+lang_code="([\w\-]+)"', sub_list)
sub_lang_list = dict((l[1], l[0]) for l in sub_lang_list)
if not sub_lang_list:
- return (u'WARNING: video doesn\'t have subtitles', None)
+ return (u'video doesn\'t have subtitles', None)
return sub_lang_list
def _list_available_subtitles(self, video_id):
@@ -265,6 +265,10 @@ def _list_available_subtitles(self, video_id):
self.report_video_subtitles_available(video_id, sub_lang_list)
def _request_subtitle(self, sub_lang, sub_name, video_id, format):
+ """
+ Return tuple:
+ (error_message, sub_lang, sub)
+ """
self.report_video_subtitles_request(video_id, sub_lang, format)
params = compat_urllib_parse.urlencode({
'lang': sub_lang,
@@ -276,14 +280,20 @@ def _request_subtitle(self, sub_lang, sub_name, video_id, format):
try:
sub = compat_urllib_request.urlopen(url).read().decode('utf-8')
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
- return (u'WARNING: unable to download video subtitles: %s' % compat_str(err), None)
+ return (u'unable to download video subtitles: %s' % compat_str(err), None, None)
if not sub:
- return (u'WARNING: Did not fetch video subtitles', None)
+ return (u'Did not fetch video subtitles', None, None)
return (None, sub_lang, sub)
def _extract_subtitle(self, video_id):
+ """
+ Return a list with a tuple:
+ [(error_message, sub_lang, sub)]
+ """
sub_lang_list = self._get_available_subtitles(video_id)
sub_format = self._downloader.params.get('subtitlesformat')
+ if isinstance(sub_lang_list,tuple): #There was some error, it didn't get the available subtitles
+ return [(sub_lang_list[0], None, None)]
if self._downloader.params.get('subtitleslang', False):
sub_lang = self._downloader.params.get('subtitleslang')
elif 'en' in sub_lang_list:
@@ -291,7 +301,7 @@ def _extract_subtitle(self, video_id):
else:
sub_lang = list(sub_lang_list.keys())[0]
if not sub_lang in sub_lang_list:
- return (u'WARNING: no closed captions found in the specified language "%s"' % sub_lang, None)
+ return [(u'no closed captions found in the specified language "%s"' % sub_lang, None, None)]
subtitle = self._request_subtitle(sub_lang, sub_lang_list[sub_lang].encode('utf-8'), video_id, sub_format)
return [subtitle]
@@ -299,6 +309,8 @@ def _extract_subtitle(self, video_id):
def _extract_all_subtitles(self, video_id):
sub_lang_list = self._get_available_subtitles(video_id)
sub_format = self._downloader.params.get('subtitlesformat')
+ if isinstance(sub_lang_list,tuple): #There was some error, it didn't get the available subtitles
+ return [(sub_lang_list[0], None, None)]
subtitles = []
for sub_lang in sub_lang_list:
subtitle = self._request_subtitle(sub_lang, sub_lang_list[sub_lang].encode('utf-8'), video_id, sub_format)
@@ -532,14 +544,14 @@ def _real_extract(self, url):
if video_subtitles:
(sub_error, sub_lang, sub) = video_subtitles[0]
if sub_error:
- self._downloader.trouble(sub_error)
+ self._downloader.report_error(sub_error)
if self._downloader.params.get('allsubtitles', False):
video_subtitles = self._extract_all_subtitles(video_id)
for video_subtitle in video_subtitles:
(sub_error, sub_lang, sub) = video_subtitle
if sub_error:
- self._downloader.trouble(sub_error)
+ self._downloader.report_error(sub_error)
if self._downloader.params.get('listsubtitles', False):
sub_lang_list = self._list_available_subtitles(video_id)
| --write-sub causes AttributeError: 'tuple' object has no attribute 'keys'
```
$ git rev-parse HEAD
f10b2a9c14db686e7f9b7d050f41b26d5cc35e01
$ python -m youtube_dl --write-sub "https://www.youtube.com/watch?v=sZtfnC2CyU0"
[youtube] Setting language
[youtube] sZtfnC2CyU0: Downloading video webpage
[youtube] sZtfnC2CyU0: Downloading video info webpage
[youtube] sZtfnC2CyU0: Extracting video information
[youtube] sZtfnC2CyU0: Checking available subtitles
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/at/trash/youtube-dl/youtube_dl/__main__.py", line 17, in <module>
youtube_dl.main()
File "youtube_dl/__init__.py", line 540, in main
_real_main()
File "youtube_dl/__init__.py", line 524, in _real_main
retcode = fd.download(all_urls)
File "youtube_dl/FileDownloader.py", line 547, in download
videos = ie.extract(url)
File "youtube_dl/InfoExtractors.py", line 96, in extract
return self._real_extract(url)
File "youtube_dl/InfoExtractors.py", line 531, in _real_extract
video_subtitles = self._extract_subtitle(video_id)
File "youtube_dl/InfoExtractors.py", line 292, in _extract_subtitle
sub_lang = list(sub_lang_list.keys())[0]
AttributeError: 'tuple' object has no attribute 'keys'
```
| The problem is that when youtube-dl can't find any subtitles, `_get_available_subtitles` returns a tuple containing the warning message instead of a dict of languages.
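A standalone sketch of the guard the patch above adds in `_extract_subtitle` (the actual subtitle download is replaced by a placeholder here):
```
# _get_available_subtitles() returns either a dict of languages or an
# (error_message, None) tuple, so check the type before calling .keys() on it.
def extract_subtitle(sub_lang_list, requested_lang=None):
    if isinstance(sub_lang_list, tuple):
        # There was some error, the available subtitles were not fetched
        return [(sub_lang_list[0], None, None)]
    if requested_lang:
        sub_lang = requested_lang
    elif 'en' in sub_lang_list:
        sub_lang = 'en'
    else:
        sub_lang = list(sub_lang_list.keys())[0]
    if sub_lang not in sub_lang_list:
        return [('no closed captions found in the specified language "%s"' % sub_lang, None, None)]
    return [(None, sub_lang, '<subtitle data would be fetched here>')]


print(extract_subtitle(("video doesn't have subtitles", None)))   # error path
print(extract_subtitle({'en': 'English', 'de': 'Deutsch'}))       # normal path
```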
| 2013-03-30T13:15:47Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/at/trash/youtube-dl/youtube_dl/__main__.py", line 17, in <module>
youtube_dl.main()
File "youtube_dl/__init__.py", line 540, in main
_real_main()
File "youtube_dl/__init__.py", line 524, in _real_main
retcode = fd.download(all_urls)
File "youtube_dl/FileDownloader.py", line 547, in download
videos = ie.extract(url)
File "youtube_dl/InfoExtractors.py", line 96, in extract
return self._real_extract(url)
File "youtube_dl/InfoExtractors.py", line 531, in _real_extract
video_subtitles = self._extract_subtitle(video_id)
File "youtube_dl/InfoExtractors.py", line 292, in _extract_subtitle
sub_lang = list(sub_lang_list.keys())[0]
AttributeError: 'tuple' object has no attribute 'keys'
| 18,980 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-8354 | 1ac6e794cb36af612db97007006fc7cf1468e049 | diff --git a/youtube_dl/postprocessor/ffmpeg.py b/youtube_dl/postprocessor/ffmpeg.py
--- a/youtube_dl/postprocessor/ffmpeg.py
+++ b/youtube_dl/postprocessor/ffmpeg.py
@@ -391,6 +391,10 @@ def run(self, info):
for (name, value) in metadata.items():
options.extend(['-metadata', '%s=%s' % (name, value)])
+ # https://github.com/rg3/youtube-dl/issues/8350
+ if info['protocol'] == 'm3u8_native':
+ options.extend(['-bsf:a', 'aac_adtstoasc'])
+
self._downloader.to_screen('[ffmpeg] Adding metadata to \'%s\'' % filename)
self.run_ffmpeg(filename, temp_filename, options)
os.remove(encodeFilename(filename))
| Vimeo: Chokes on --add-metadata
```
youtube-dl --ignore-config --verbose --download-archive ~/.ytdlarchive --no-overwrites --call-home --continue --write-info-json --write-description --write-thumbnail --merge-output-format mkv--all-subs --sub-format srt --convert-subs srt --write-sub --add-metadata https://vimeo.com/70668043 https://vimeo.com/70666333
```
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--ignore-config', u'--verbose', u'--download-archive', u'/home/vxbinaca/.ytdlarchive', u'--no-overwrites', u'--call-home', u'--continue', u'--write-info-json', u'--write-description', u'--write-thumbnail', u'--merge-output-format', u'mkv--all-subs', u'--sub-format', u'srt', u'--convert-subs', u'srt', u'--write-sub', u'--add-metadata', u'https://vimeo.com/70668043', u'https://vimeo.com/70666333']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.01.27
[debug] Python version 2.7.10 - Linux-4.2.0-25-generic-x86_64-with-Ubuntu-15.10-wily
[debug] exe versions: ffmpeg 2.7.5-0ubuntu0.15.10.1, ffprobe 2.7.5-0ubuntu0.15.10.1, rtmpdump 2.4
[debug] Proxy map: {}
[debug] Public IP address: 76.101.221.232
[vimeo] 70668043: Downloading webpage
[vimeo] 70668043: Extracting information
[vimeo] 70668043: Downloading webpage
[vimeo] 70668043: Downloading JSON metadata
[vimeo] 70668043: Downloading m3u8 information
[info] Video description is already present
[info] Video description metadata is already present
[vimeo] 70668043: Thumbnail is already present
[debug] Invoking downloader on u'https://10-lvl3-hls.vimeocdn.com/1453990861-28223b02a7d6053983227f4b64333f85d0240957/01/4133/2/70668043/178317076.mp4.m3u8'
[download] Ask Ash No. 1-70668043.mp4 has already been downloaded
[download] 100% of 9.83MiB
[ffmpeg] Adding metadata to 'Ask Ash No. 1-70668043.mp4'
[debug] ffmpeg command line: ffmpeg -y -i 'file:Ask Ash No. 1-70668043.mp4' -c copy -metadata 'comment=More | junnnktank.com/thenakedissue/faq
f. Ash twitter.com/ashvandeesch
This is Ash. She'"'"'s from Holland. She'"'"'s a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
Ask Ash | thenakedissue@junnnktank.com' -metadata 'description=More | junnnktank.com/thenakedissue/faq
f. Ash twitter.com/ashvandeesch
This is Ash. She'"'"'s from Holland. She'"'"'s a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
Ask Ash | thenakedissue@junnnktank.com' -metadata artist=JUNNNKTANK -metadata 'title=Ask Ash No. 1' -metadata date=20130719 -metadata purl=https://vimeo.com/70668043 'file:Ask Ash No. 1-70668043.temp.mp4'
ERROR: Conversion failed!
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 1737, in post_process
files_to_delete, info = pp.run(info)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 395, in run
self.run_ffmpeg(filename, temp_filename, options)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 159, in run_ffmpeg
self.run_ffmpeg_multiple_files([path], out_path, opts)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 155, in run_ffmpeg_multiple_files
raise FFmpegPostProcessorError(msg)
FFmpegPostProcessorError
```
| Looks like it's failing on the metadata embed; it's choking on `--add-metadata`
Here's the output of the ffmpeg command:
```
$ ffmpeg -y -i 'file:Ask Ash No. 1-70668043.mp4' -c copy -metadata date=20130719 -metadata 'description=More | junnnktank.com/thenakedissue/faq
f. Ash twitter.com/ashvandeesch
This is Ash. She'"'"'s from Holland. She'"'"'s a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
Ask Ash | thenakedissue@junnnktank.com' -metadata 'title=Ask Ash No. 1' -metadata artist=JUNNNKTANK -metadata purl=https://vimeo.com/70668043 -metadata 'comment=More | junnnktank.com/thenakedissue/faq
f. Ash twitter.com/ashvandeesch
This is Ash. She'"'"'s from Holland. She'"'"'s a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
Ask Ash | thenakedissue@junnnktank.com' 'file:Ask Ash No. 1-70668043.temp.mp4'
ffmpeg version 2.8.5 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avisynth --enable-avresample --enable-fontconfig --enable-gnutls --enable-gpl --enable-ladspa --enable-libass --enable-libbluray --enable-libdcadec --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-shared --enable-version3 --enable-x11grab
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, mpegts, from 'file:Ask Ash No. 1-70668043.mp4':
Duration: 00:01:05.50, start: 0.700000, bitrate: 1259 kb/s
Program 1
Stream #0:0[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1280x720, 24 fps, 24 tbr, 90k tbn, 48 tbc
Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 144 kb/s
[mp4 @ 0x559d96690260] Codec for stream 0 does not use global headers but container format requires global headers
[mp4 @ 0x559d96690260] Codec for stream 1 does not use global headers but container format requires global headers
Output #0, mp4, to 'file:Ask Ash No. 1-70668043.temp.mp4':
Metadata:
date : 20130719
description : More | junnnktank.com/thenakedissue/faq
: f. Ash twitter.com/ashvandeesch
: This is Ash. She's from Holland. She's a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
: Ask Ash | thenakedissue@junnnktank.com
title : Ask Ash No. 1
artist : JUNNNKTANK
purl : https://vimeo.com/70668043
comment : More | junnnktank.com/thenakedissue/faq
: f. Ash twitter.com/ashvandeesch
: This is Ash. She's from Holland. She's a regular {and fucking awesome} contributor to The Naked Issue. You ask her questions, she makes a video and answers them {while looking pretty damn cute}.
: Ask Ash | thenakedissue@junnnktank.com
encoder : Lavf56.40.101
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=2-31, 24 fps, 24 tbr, 90k tbn, 90k tbc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, 144 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x559d96690260] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
av_interleaved_write_frame(): Operation not permitted
[mp4 @ 0x559d96690260] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
frame= 2 fps=0.0 q=-1.0 Lsize= 3kB time=00:00:00.13 bitrate= 151.7kbits/s
video:1kB audio:3kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
```
These lines are relevant:
```
[mp4 @ 0x559d96690260] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
av_interleaved_write_frame(): Operation not permitted
[mp4 @ 0x559d96690260] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
```
With `NativeHlsFD`, `-bsf:a aac_adtstoasc` is not applied during downloading, so FFmpeg can't handle the downloaded file. @remitamine I see 'm3u8_native' was added in your commit e5c209a1bcea206bee684914599c84acf886487c. Did you have a reason for that change? I've tested a few Vimeo videos, and `HlsFD` (which uses ffmpeg to download) works.
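For reference, a simplified sketch of what the merged patch above does in the metadata post-processor (`youtube_dl/postprocessor/ffmpeg.py`): when the file came through the native HLS downloader, append the bitstream filter that ffmpeg's own error message asks for. This is only a sketch, not the full post-processor:
```
# Build the ffmpeg option list for the metadata embed; for 'm3u8_native'
# downloads the AAC audio is still in ADTS framing, so add aac_adtstoasc.
def build_metadata_options(info, metadata):
    options = ['-c', 'copy']
    for name, value in metadata.items():
        options.extend(['-metadata', '%s=%s' % (name, value)])
    if info.get('protocol') == 'm3u8_native':
        options.extend(['-bsf:a', 'aac_adtstoasc'])
    return options


print(build_metadata_options({'protocol': 'm3u8_native'},
                             {'title': 'Ask Ash No. 1', 'artist': 'JUNNNKTANK'}))
```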
If it helps, I downloaded that entire channel and encountered what I think may be a similar error, but I just re-ran the rip (this is easy since I have it set not to clobber or retry already-downloaded videos) and it passed over the problem video. This particular one, however, wouldn't download without dropping the `--add-metadata` flag.
92/94 videos on that channel completed without error.
@vxbinaca The problematic line is this one: https://github.com/rg3/youtube-dl/blob/ed7cd1e859cf97e975a28a5e8c58a1d1aca819fe/youtube_dl/extractor/vimeo.py#L459
Change the following lines in `youtube_dl/extractor/vimeo.py`
```
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False))
```
to:
```
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, 'mp4', m3u8_id='hls', fatal=False))
```
Works for me. However, the parameter 'm3u8_native' is added by another developer, so I have to ask his/her original intention before making changes.
> However, the parameter 'm3u8_native' is added by another developer, so I have to ask his/her original intention before making changes.
https://github.com/rg3/youtube-dl/pull/7126#discussion_r41700477
So let's call @dstftw: using `NativeHlsFD` by default is causing problems on ffmpeg like this. (OK with avconv v12_dev0-2305-g77a44f5). Is using ffmpeg to download HLS streams causing a different problem?
So do you suggest switching all `m3u8_native` to `m3u8` in all (~45) extractors?
The problem is: if `m3u8_native` is specified by the extractor, there's no simple way to force `HlsFD`. Maybe adding an option `--hls-prefer-ffmpeg` would solve all the problems? If we keep the current framework, replacing all `m3u8_native` occurrences is necessary for this case.
The presence of `--hls-prefer-ffmpeg` does not really solve the problem for the average user who uses the defaults, but it would be useful.
| 2016-01-28T18:07:48Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 1737, in post_process
files_to_delete, info = pp.run(info)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 395, in run
self.run_ffmpeg(filename, temp_filename, options)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 159, in run_ffmpeg
self.run_ffmpeg_multiple_files([path], out_path, opts)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/postprocessor/ffmpeg.py", line 155, in run_ffmpeg_multiple_files
raise FFmpegPostProcessorError(msg)
FFmpegPostProcessorError
| 18,987 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-8843 | 1e236d7e2350e055bbe230b12490e4369aaa0956 | diff --git a/youtube_dl/extractor/extractors.py b/youtube_dl/extractor/extractors.py
--- a/youtube_dl/extractor/extractors.py
+++ b/youtube_dl/extractor/extractors.py
@@ -923,7 +923,6 @@
from .wdr import (
WDRIE,
WDRMobileIE,
- WDRMausIE,
)
from .webofstories import (
WebOfStoriesIE,
diff --git a/youtube_dl/extractor/wdr.py b/youtube_dl/extractor/wdr.py
--- a/youtube_dl/extractor/wdr.py
+++ b/youtube_dl/extractor/wdr.py
@@ -1,214 +1,205 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
-import itertools
import re
from .common import InfoExtractor
-from ..compat import (
- compat_parse_qs,
- compat_urlparse,
-)
from ..utils import (
+ determine_ext,
+ js_to_json,
+ strip_jsonp,
unified_strdate,
- qualities,
+ ExtractorError,
)
class WDRIE(InfoExtractor):
- _PLAYER_REGEX = '-(?:video|audio)player(?:_size-[LMS])?'
- _VALID_URL = r'(?P<url>https?://www\d?\.(?:wdr\d?|funkhauseuropa)\.de/)(?P<id>.+?)(?P<player>%s)?\.html' % _PLAYER_REGEX
+ _CURRENT_MAUS_URL = r'https?://(?:www\.)wdrmaus.de/(?:[^/]+/){1,2}[^/?#]+\.php5'
+ _PAGE_REGEX = r'/mediathek/(?P<media_type>[^/]+)/(?P<type>[^/]+)/(?P<display_id>.+)\.html'
+ _VALID_URL = r'(?P<page_url>https?://(?:www\d\.)?wdr\d?\.de)' + _PAGE_REGEX + '|' + _CURRENT_MAUS_URL
_TESTS = [
{
- 'url': 'http://www1.wdr.de/mediathek/video/sendungen/servicezeit/videoservicezeit560-videoplayer_size-L.html',
+ 'url': 'http://www1.wdr.de/mediathek/video/sendungen/doku-am-freitag/video-geheimnis-aachener-dom-100.html',
+ 'md5': 'e58c39c3e30077141d258bf588700a7b',
'info_dict': {
- 'id': 'mdb-362427',
+ 'id': 'mdb-1058683',
'ext': 'flv',
- 'title': 'Servicezeit',
- 'description': 'md5:c8f43e5e815eeb54d0b96df2fba906cb',
- 'upload_date': '20140310',
- 'is_live': False
- },
- 'params': {
- 'skip_download': True,
+ 'display_id': 'doku-am-freitag/video-geheimnis-aachener-dom-100',
+ 'title': 'Geheimnis Aachener Dom',
+ 'alt_title': 'Doku am Freitag',
+ 'upload_date': '20160304',
+ 'description': 'md5:87be8ff14d8dfd7a7ee46f0299b52318',
+ 'is_live': False,
+ 'subtitles': {'de': [{
+ 'url': 'http://ondemand-ww.wdr.de/medp/fsk0/105/1058683/1058683_12220974.xml'
+ }]},
},
'skip': 'Page Not Found',
},
{
- 'url': 'http://www1.wdr.de/themen/av/videomargaspiegelisttot101-videoplayer.html',
+ 'url': 'http://www1.wdr.de/mediathek/audio/wdr3/wdr3-gespraech-am-samstag/audio-schriftstellerin-juli-zeh-100.html',
+ 'md5': 'f4c1f96d01cf285240f53ea4309663d8',
'info_dict': {
- 'id': 'mdb-363194',
- 'ext': 'flv',
- 'title': 'Marga Spiegel ist tot',
- 'description': 'md5:2309992a6716c347891c045be50992e4',
- 'upload_date': '20140311',
- 'is_live': False
- },
- 'params': {
- 'skip_download': True,
+ 'id': 'mdb-1072000',
+ 'ext': 'mp3',
+ 'display_id': 'wdr3-gespraech-am-samstag/audio-schriftstellerin-juli-zeh-100',
+ 'title': 'Schriftstellerin Juli Zeh',
+ 'alt_title': 'WDR 3 Gespräch am Samstag',
+ 'upload_date': '20160312',
+ 'description': 'md5:e127d320bc2b1f149be697ce044a3dd7',
+ 'is_live': False,
+ 'subtitles': {}
},
'skip': 'Page Not Found',
},
{
- 'url': 'http://www1.wdr.de/themen/kultur/audioerlebtegeschichtenmargaspiegel100-audioplayer.html',
- 'md5': '83e9e8fefad36f357278759870805898',
+ 'url': 'http://www1.wdr.de/mediathek/video/live/index.html',
'info_dict': {
- 'id': 'mdb-194332',
- 'ext': 'mp3',
- 'title': 'Erlebte Geschichten: Marga Spiegel (29.11.2009)',
- 'description': 'md5:2309992a6716c347891c045be50992e4',
- 'upload_date': '20091129',
- 'is_live': False
+ 'id': 'mdb-103364',
+ 'ext': 'mp4',
+ 'display_id': 'index',
+ 'title': r're:^WDR Fernsehen im Livestream [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
+ 'alt_title': 'WDR Fernsehen Live',
+ 'upload_date': None,
+ 'description': 'md5:ae2ff888510623bf8d4b115f95a9b7c9',
+ 'is_live': True,
+ 'subtitles': {}
+ },
+ 'params': {
+ 'skip_download': True, # m3u8 download
},
},
{
- 'url': 'http://www.funkhauseuropa.de/av/audioflaviacoelhoamaramar100-audioplayer.html',
- 'md5': '99a1443ff29af19f6c52cf6f4dc1f4aa',
+ 'url': 'http://www1.wdr.de/mediathek/video/sendungen/aktuelle-stunde/aktuelle-stunde-120.html',
+ 'playlist_mincount': 8,
'info_dict': {
- 'id': 'mdb-478135',
- 'ext': 'mp3',
- 'title': 'Flavia Coelho: Amar é Amar',
- 'description': 'md5:7b29e97e10dfb6e265238b32fa35b23a',
- 'upload_date': '20140717',
- 'is_live': False
+ 'id': 'aktuelle-stunde/aktuelle-stunde-120',
},
- 'skip': 'Page Not Found',
},
{
- 'url': 'http://www1.wdr.de/mediathek/video/sendungen/quarks_und_co/filterseite-quarks-und-co100.html',
- 'playlist_mincount': 146,
+ 'url': 'http://www.wdrmaus.de/aktuelle-sendung/index.php5',
'info_dict': {
- 'id': 'mediathek/video/sendungen/quarks_und_co/filterseite-quarks-und-co100',
- }
+ 'id': 'mdb-1096487',
+ 'ext': 'flv',
+ 'upload_date': 're:^[0-9]{8}$',
+ 'title': 're:^Die Sendung mit der Maus vom [0-9.]{10}$',
+ 'description': '- Die Sendung mit der Maus -',
+ },
+ 'skip': 'The id changes from week to week because of the new episode'
},
{
- 'url': 'http://www1.wdr.de/mediathek/video/livestream/index.html',
+ 'url': 'http://www.wdrmaus.de/sachgeschichten/sachgeschichten/achterbahn.php5',
+ 'md5': 'ca365705551e4bd5217490f3b0591290',
'info_dict': {
- 'id': 'mdb-103364',
- 'title': 're:^WDR Fernsehen Live [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
- 'description': 'md5:ae2ff888510623bf8d4b115f95a9b7c9',
+ 'id': 'mdb-186083',
'ext': 'flv',
- 'upload_date': '20150101',
- 'is_live': True
+ 'upload_date': '20130919',
+ 'title': 'Sachgeschichte - Achterbahn ',
+ 'description': '- Die Sendung mit der Maus -',
},
'params': {
- 'skip_download': True,
+ 'skip_download': True, # the file has different versions :(
},
- }
+ },
]
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
- page_url = mobj.group('url')
- page_id = mobj.group('id')
-
- webpage = self._download_webpage(url, page_id)
-
- if mobj.group('player') is None:
+ url_type = mobj.group('type')
+ page_url = mobj.group('page_url')
+ display_id = mobj.group('display_id')
+ webpage = self._download_webpage(url, display_id)
+
+ # for wdr.de the data-extension is in a tag with the class "mediaLink"
+ # for wdrmaus its in a link to the page in a multiline "videoLink"-tag
+ json_metadata = self._html_search_regex(
+ r'class=(?:"mediaLink\b[^"]*"[^>]+|"videoLink\b[^"]*"[\s]*>\n[^\n]*)data-extension="([^"]+)"',
+ webpage, 'media link', default=None, flags=re.MULTILINE)
+
+ if not json_metadata:
entries = [
- self.url_result(page_url + href, 'WDR')
+ self.url_result(page_url + href[0], 'WDR')
for href in re.findall(
- r'<a href="/?(.+?%s\.html)" rel="nofollow"' % self._PLAYER_REGEX,
+ r'<a href="(%s)"[^>]+data-extension=' % self._PAGE_REGEX,
webpage)
]
if entries: # Playlist page
- return self.playlist_result(entries, page_id)
+ return self.playlist_result(entries, playlist_id=display_id)
- # Overview page
- entries = []
- for page_num in itertools.count(2):
- hrefs = re.findall(
- r'<li class="mediathekvideo"\s*>\s*<img[^>]*>\s*<a href="(/mediathek/video/[^"]+)"',
- webpage)
- entries.extend(
- self.url_result(page_url + href, 'WDR')
- for href in hrefs)
- next_url_m = re.search(
- r'<li class="nextToLast">\s*<a href="([^"]+)"', webpage)
- if not next_url_m:
- break
- next_url = page_url + next_url_m.group(1)
- webpage = self._download_webpage(
- next_url, page_id,
- note='Downloading playlist page %d' % page_num)
- return self.playlist_result(entries, page_id)
+ raise ExtractorError('No downloadable streams found', expected=True)
- flashvars = compat_parse_qs(self._html_search_regex(
- r'<param name="flashvars" value="([^"]+)"', webpage, 'flashvars'))
+ media_link_obj = self._parse_json(json_metadata, display_id,
+ transform_source=js_to_json)
+ jsonp_url = media_link_obj['mediaObj']['url']
- page_id = flashvars['trackerClipId'][0]
- video_url = flashvars['dslSrc'][0]
- title = flashvars['trackerClipTitle'][0]
- thumbnail = flashvars['startPicture'][0] if 'startPicture' in flashvars else None
- is_live = flashvars.get('isLive', ['0'])[0] == '1'
+ metadata = self._download_json(
+ jsonp_url, 'metadata', transform_source=strip_jsonp)
+
+ metadata_tracker_data = metadata['trackerData']
+ metadata_media_resource = metadata['mediaResource']
+
+ formats = []
+
+ # check if the metadata contains a direct URL to a file
+ metadata_media_alt = metadata_media_resource.get('alt')
+ if metadata_media_alt:
+ for tag_name in ['videoURL', 'audioURL']:
+ if tag_name in metadata_media_alt:
+ alt_url = metadata_media_alt[tag_name]
+ if determine_ext(alt_url) == 'm3u8':
+ m3u_fmt = self._extract_m3u8_formats(
+ alt_url, display_id, 'mp4', 'm3u8_native',
+ m3u8_id='hls')
+ formats.extend(m3u_fmt)
+ else:
+ formats.append({
+ 'url': alt_url
+ })
+
+ # check if there are flash-streams for this video
+ if 'dflt' in metadata_media_resource and 'videoURL' in metadata_media_resource['dflt']:
+ video_url = metadata_media_resource['dflt']['videoURL']
+ if video_url.endswith('.f4m'):
+ full_video_url = video_url + '?hdcore=3.2.0&plugin=aasp-3.2.0.77.18'
+ formats.extend(self._extract_f4m_formats(full_video_url, display_id, f4m_id='hds', fatal=False))
+ elif video_url.endswith('.smil'):
+ formats.extend(self._extract_smil_formats(video_url, 'stream', fatal=False))
+
+ subtitles = {}
+ caption_url = metadata_media_resource.get('captionURL')
+ if caption_url:
+ subtitles['de'] = [{
+ 'url': caption_url
+ }]
+
+ title = metadata_tracker_data.get('trackerClipTitle')
+ is_live = url_type == 'live'
if is_live:
title = self._live_title(title)
-
- if 'trackerClipAirTime' in flashvars:
- upload_date = flashvars['trackerClipAirTime'][0]
+ upload_date = None
+ elif 'trackerClipAirTime' in metadata_tracker_data:
+ upload_date = metadata_tracker_data['trackerClipAirTime']
else:
- upload_date = self._html_search_meta(
- 'DC.Date', webpage, 'upload date')
+ upload_date = self._html_search_meta('DC.Date', webpage, 'upload date')
if upload_date:
upload_date = unified_strdate(upload_date)
- formats = []
- preference = qualities(['S', 'M', 'L', 'XL'])
-
- if video_url.endswith('.f4m'):
- formats.extend(self._extract_f4m_formats(
- video_url + '?hdcore=3.2.0&plugin=aasp-3.2.0.77.18', page_id,
- f4m_id='hds', fatal=False))
- elif video_url.endswith('.smil'):
- formats.extend(self._extract_smil_formats(
- video_url, page_id, False, {
- 'hdcore': '3.3.0',
- 'plugin': 'aasp-3.3.0.99.43',
- }))
- else:
- formats.append({
- 'url': video_url,
- 'http_headers': {
- 'User-Agent': 'mobile',
- },
- })
-
- m3u8_url = self._search_regex(
- r'rel="adaptiv"[^>]+href="([^"]+)"',
- webpage, 'm3u8 url', default=None)
- if m3u8_url:
- formats.extend(self._extract_m3u8_formats(
- m3u8_url, page_id, 'mp4', 'm3u8_native',
- m3u8_id='hls', fatal=False))
-
- direct_urls = re.findall(
- r'rel="web(S|M|L|XL)"[^>]+href="([^"]+)"', webpage)
- if direct_urls:
- for quality, video_url in direct_urls:
- formats.append({
- 'url': video_url,
- 'preference': preference(quality),
- 'http_headers': {
- 'User-Agent': 'mobile',
- },
- })
-
self._sort_formats(formats)
- description = self._html_search_meta('Description', webpage, 'description')
-
return {
- 'id': page_id,
- 'formats': formats,
+ 'id': metadata_tracker_data.get('trackerClipId', display_id),
+ 'display_id': display_id,
'title': title,
- 'description': description,
- 'thumbnail': thumbnail,
+ 'alt_title': metadata_tracker_data.get('trackerClipSubcategory'),
+ 'formats': formats,
'upload_date': upload_date,
- 'is_live': is_live
+ 'description': self._html_search_meta('Description', webpage),
+ 'is_live': is_live,
+ 'subtitles': subtitles,
}
@@ -241,81 +232,3 @@ def _real_extract(self, url):
'User-Agent': 'mobile',
},
}
-
-
-class WDRMausIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?wdrmaus\.de/(?:[^/]+/){,2}(?P<id>[^/?#]+)(?:/index\.php5|(?<!index)\.php5|/(?:$|[?#]))'
- IE_DESC = 'Sendung mit der Maus'
- _TESTS = [{
- 'url': 'http://www.wdrmaus.de/aktuelle-sendung/index.php5',
- 'info_dict': {
- 'id': 'aktuelle-sendung',
- 'ext': 'mp4',
- 'thumbnail': 're:^http://.+\.jpg',
- 'upload_date': 're:^[0-9]{8}$',
- 'title': 're:^[0-9.]{10} - Aktuelle Sendung$',
- }
- }, {
- 'url': 'http://www.wdrmaus.de/sachgeschichten/sachgeschichten/40_jahre_maus.php5',
- 'md5': '3b1227ca3ed28d73ec5737c65743b2a3',
- 'info_dict': {
- 'id': '40_jahre_maus',
- 'ext': 'mp4',
- 'thumbnail': 're:^http://.+\.jpg',
- 'upload_date': '20131007',
- 'title': '12.03.2011 - 40 Jahre Maus',
- }
- }]
-
- def _real_extract(self, url):
- video_id = self._match_id(url)
-
- webpage = self._download_webpage(url, video_id)
- param_code = self._html_search_regex(
- r'<a href="\?startVideo=1&([^"]+)"', webpage, 'parameters')
-
- title_date = self._search_regex(
- r'<div class="sendedatum"><p>Sendedatum:\s*([0-9\.]+)</p>',
- webpage, 'air date')
- title_str = self._html_search_regex(
- r'<h1>(.*?)</h1>', webpage, 'title')
- title = '%s - %s' % (title_date, title_str)
- upload_date = unified_strdate(
- self._html_search_meta('dc.date', webpage))
-
- fields = compat_parse_qs(param_code)
- video_url = fields['firstVideo'][0]
- thumbnail = compat_urlparse.urljoin(url, fields['startPicture'][0])
-
- formats = [{
- 'format_id': 'rtmp',
- 'url': video_url,
- }]
-
- jscode = self._download_webpage(
- 'http://www.wdrmaus.de/codebase/js/extended-medien.min.js',
- video_id, fatal=False,
- note='Downloading URL translation table',
- errnote='Could not download URL translation table')
- if jscode:
- for m in re.finditer(
- r"stream:\s*'dslSrc=(?P<stream>[^']+)',\s*download:\s*'(?P<dl>[^']+)'\s*\}",
- jscode):
- if video_url.startswith(m.group('stream')):
- http_url = video_url.replace(
- m.group('stream'), m.group('dl'))
- formats.append({
- 'format_id': 'http',
- 'url': http_url,
- })
- break
-
- self._sort_formats(formats)
-
- return {
- 'id': video_id,
- 'title': title,
- 'formats': formats,
- 'thumbnail': thumbnail,
- 'upload_date': upload_date,
- }
| WDR Maus stopped working
Downloading the current "Sendung mit der Maus" does not work. Verbose output:
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'http://www.wdrmaus.de/aktuelle-sendung/index.php5', u'--verbose']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.02.13
[debug] Python version 2.7.9 - Linux-4.0.0-kali1-amd64-x86_64-with-Kali-2.0-sana
[debug] exe versions: none
[debug] Proxy map: {}
[WDRMaus] aktuelle-sendung: Downloading webpage
ERROR: Unable to extract parameters; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 666, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 315, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/wdr.py", line 275, in _real_extract
r'<a href="\?startVideo=1&([^"]+)"', webpage, 'parameters')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 619, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 610, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract parameters; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
| 2016-03-12T19:31:04Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 666, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 315, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/wdr.py", line 275, in _real_extract
r'<a href="\?startVideo=1&([^"]+)"', webpage, 'parameters')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 619, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 610, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract parameters; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 18,994 |
||||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-8898 | 782b1b5bd1cdaaead6865dee5d300486e7dd8348 | diff --git a/youtube_dl/__init__.py b/youtube_dl/__init__.py
--- a/youtube_dl/__init__.py
+++ b/youtube_dl/__init__.py
@@ -144,14 +144,20 @@ def _real_main(argv=None):
if numeric_limit is None:
parser.error('invalid max_filesize specified')
opts.max_filesize = numeric_limit
- if opts.retries is not None:
- if opts.retries in ('inf', 'infinite'):
- opts_retries = float('inf')
+
+ def parse_retries(retries):
+ if retries in ('inf', 'infinite'):
+ parsed_retries = float('inf')
else:
try:
- opts_retries = int(opts.retries)
+ parsed_retries = int(retries)
except (TypeError, ValueError):
parser.error('invalid retry count specified')
+ return parsed_retries
+ if opts.retries is not None:
+ opts.retries = parse_retries(opts.retries)
+ if opts.fragment_retries is not None:
+ opts.fragment_retries = parse_retries(opts.fragment_retries)
if opts.buffersize is not None:
numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize)
if numeric_buffersize is None:
@@ -299,7 +305,8 @@ def _real_main(argv=None):
'force_generic_extractor': opts.force_generic_extractor,
'ratelimit': opts.ratelimit,
'nooverwrites': opts.nooverwrites,
- 'retries': opts_retries,
+ 'retries': opts.retries,
+ 'fragment_retries': opts.fragment_retries,
'buffersize': opts.buffersize,
'noresizebuffer': opts.noresizebuffer,
'continuedl': opts.continue_dl,
diff --git a/youtube_dl/downloader/common.py b/youtube_dl/downloader/common.py
--- a/youtube_dl/downloader/common.py
+++ b/youtube_dl/downloader/common.py
@@ -115,6 +115,10 @@ def format_speed(speed):
return '%10s' % '---b/s'
return '%10s' % ('%s/s' % format_bytes(speed))
+ @staticmethod
+ def format_retries(retries):
+ return 'inf' if retries == float('inf') else '%.0f' % retries
+
@staticmethod
def best_block_size(elapsed_time, bytes):
new_min = max(bytes / 2.0, 1.0)
@@ -297,7 +301,9 @@ def report_resuming_byte(self, resume_len):
def report_retry(self, count, retries):
"""Report retry in case of HTTP error 5xx"""
- self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %.0f)...' % (count, retries))
+ self.to_screen(
+ '[download] Got server HTTP error. Retrying (attempt %d of %s)...'
+ % (count, self.format_retries(retries)))
def report_file_already_downloaded(self, file_name):
"""Report file has already been fully downloaded."""
diff --git a/youtube_dl/downloader/dash.py b/youtube_dl/downloader/dash.py
--- a/youtube_dl/downloader/dash.py
+++ b/youtube_dl/downloader/dash.py
@@ -4,6 +4,7 @@
import re
from .fragment import FragmentFD
+from ..compat import compat_urllib_error
from ..utils import (
sanitize_open,
encodeFilename,
@@ -36,20 +37,41 @@ def combine_url(base_url, target_url):
segments_filenames = []
- def append_url_to_file(target_url, target_filename):
- success = ctx['dl'].download(target_filename, {'url': combine_url(base_url, target_url)})
- if not success:
+ fragment_retries = self.params.get('fragment_retries', 0)
+
+ def append_url_to_file(target_url, tmp_filename, segment_name):
+ target_filename = '%s-%s' % (tmp_filename, segment_name)
+ count = 0
+ while count <= fragment_retries:
+ try:
+ success = ctx['dl'].download(target_filename, {'url': combine_url(base_url, target_url)})
+ if not success:
+ return False
+ down, target_sanitized = sanitize_open(target_filename, 'rb')
+ ctx['dest_stream'].write(down.read())
+ down.close()
+ segments_filenames.append(target_sanitized)
+ break
+ except (compat_urllib_error.HTTPError, ) as err:
+ # YouTube may often return 404 HTTP error for a fragment causing the
+ # whole download to fail. However if the same fragment is immediately
+ # retried with the same request data this usually succeeds (1-2 attemps
+ # is usually enough) thus allowing to download the whole file successfully.
+ # So, we will retry all fragments that fail with 404 HTTP error for now.
+ if err.code != 404:
+ raise
+ # Retry fragment
+ count += 1
+ if count <= fragment_retries:
+ self.report_retry_fragment(segment_name, count, fragment_retries)
+ if count > fragment_retries:
+ self.report_error('giving up after %s fragment retries' % fragment_retries)
return False
- down, target_sanitized = sanitize_open(target_filename, 'rb')
- ctx['dest_stream'].write(down.read())
- down.close()
- segments_filenames.append(target_sanitized)
if initialization_url:
- append_url_to_file(initialization_url, ctx['tmpfilename'] + '-Init')
+ append_url_to_file(initialization_url, ctx['tmpfilename'], 'Init')
for i, segment_url in enumerate(segment_urls):
- segment_filename = '%s-Seg%d' % (ctx['tmpfilename'], i)
- append_url_to_file(segment_url, segment_filename)
+ append_url_to_file(segment_url, ctx['tmpfilename'], 'Seg%d' % i)
self._finish_frag_download(ctx)
diff --git a/youtube_dl/downloader/fragment.py b/youtube_dl/downloader/fragment.py
--- a/youtube_dl/downloader/fragment.py
+++ b/youtube_dl/downloader/fragment.py
@@ -19,8 +19,17 @@ def to_screen(self, *args, **kargs):
class FragmentFD(FileDownloader):
"""
A base file downloader class for fragmented media (e.g. f4m/m3u8 manifests).
+
+ Available options:
+
+ fragment_retries: Number of times to retry a fragment for HTTP error (DASH only)
"""
+ def report_retry_fragment(self, fragment_name, count, retries):
+ self.to_screen(
+ '[download] Got server HTTP error. Retrying fragment %s (attempt %d of %s)...'
+ % (fragment_name, count, self.format_retries(retries)))
+
def _prepare_and_start_frag_download(self, ctx):
self._prepare_frag_download(ctx)
self._start_frag_download(ctx)
diff --git a/youtube_dl/options.py b/youtube_dl/options.py
--- a/youtube_dl/options.py
+++ b/youtube_dl/options.py
@@ -399,6 +399,10 @@ def _hide_login_info(opts):
'-R', '--retries',
dest='retries', metavar='RETRIES', default=10,
help='Number of retries (default is %default), or "infinite".')
+ downloader.add_option(
+ '--fragment-retries',
+ dest='fragment_retries', metavar='RETRIES', default=10,
+ help='Number of retries for a fragment (default is %default), or "infinite" (DASH only)')
downloader.add_option(
'--buffer-size',
dest='buffersize', metavar='SIZE', default='1024',
| Dash errors confirmed and download from specified file index question
I can confirm frequent errors with DashSegments!
Question: is it possible to quickly resume downloading from a certain index position?
In my case, when it fails at file 4000+ of 12000, is it possible (or should this be a feature request?) to restart from that file 4000, or from the next one?
-c (--continue) doesn't help.
I know there is the -i (ignore errors) option, but...
Filtering with --datebefore helps a little, but again, youtube-dl cycles through the whole list of 4000 entries from scratch, downloading metadata and DASH manifests for each one - so it takes 10-12 minutes now, and it will only grow. It would be good to have something like --startfrom 4122 so it can quickly skip the first 4121 entries of the playlist pages.
Back to the DashSegments error:
```
[download] Downloading video 4284 of 12889
[youtube] j9s_ZDloG2g: Downloading webpage
[youtube] j9s_ZDloG2g: Downloading video info webpage
[youtube] j9s_ZDloG2g: Extracting video information
[youtube] j9s_ZDloG2g: Downloading DASH manifest
[youtube] j9s_ZDloG2g: Downloading DASH manifest
[debug] Invoking downloader on u'https://r2---sn-jvhnu5g-c35l.googlevideo.com/vi
deoplayback/id/8fdb3f6439681b68/itag/135/source/yt_otf/requiressl/yes/ms/au/pl/2
0/mn/sn-jvhnu5g-c35l/mm/31/mv/m/ratebypass/yes/mime/video%2Fmp4/otfp/1/otf/1/lmt
/1447998553518628/upn/KI9nCpVF_40/signature/198C0C9625F51077B2CAE13AE3F267EFAF2C
74B4.470D1AA30677B1CE0D2F10A2EDF768D6F788D67F/fexp/9415031,9416126,9416916,94174
87,9418199,9420452,9420539,9422596,9423661,9423662,9425972,9426719,9427107,94282
91,9428437/mt/1454961290/key/dg_yt0/sver/3/ip/91.78.243.12/ipbits/0/expire/14549
83027/sparams/ip,ipbits,expire,id,itag,source,requiressl,ms,pl,mn,mm,mv,ratebypa
ss,mime,otfp,otf,lmt/'
[download] Destination: YDK\SAM_0096.20151119.f135.mp4
[DashSegments] j9s_ZDloG2g: Downloading initialization segment
[DashSegments] j9s_ZDloG2g: Downloading segment 1 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 2 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 3 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 4 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 5 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 6 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 7 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 8 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 9 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 10 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 11 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 12 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 13 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 14 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 15 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 16 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 17 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 18 / 49
[DashSegments] j9s_ZDloG2g: Downloading segment 19 / 49
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 1616, in process_info
File "youtube_dl\YoutubeDL.pyo", line 1564, in dl
File "youtube_dl\downloader\common.pyo", line 343, in download
File "youtube_dl\downloader\dash.pyo", line 50, in real_download
File "youtube_dl\downloader\dash.pyo", line 29, in append_url_to_file
File "youtube_dl\YoutubeDL.pyo", line 1903, in urlopen
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 469, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 656, in http_error_302
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 469, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 656, in http_error_302
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 475, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 558, in http_error_default
HTTPError: HTTP Error 404: Not Found]
```
Sometimes it passes OK, sometimes it fails on the audio (.m4a).
There were no errors until the 4000+ file mark (where DashSegments appears).
My command line looks like this (if it matters at all):
`youtube-dl -f "135+(140/171/bestaudio)/bestvideo[height<720]+bestaudio" -o "/YDK/%(title)s.%(upload_date)s.%(ext)s" >here link to videos page on YT< --restrict-filenames --verbose`
Sorry for my English and all, this is my first time submitting an issue )) Thx.
UPD: oh, I missed the beginning of the debug output; here it is ))
```
C:\Users\Daho\AppData\Roaming\youtube-dlg>youtube-dl
[debug] System config: []
[debug] User config: [u'-f', u'135+(140/171/bestaudio)/bestvideo[height<720]+bes
taudio', u'-o', u'/YDK/%(title)s.%(upload_date)s.%(ext)s', u'https://ww
w.youtube.com/channel/UCeRXV0lrOTQMKEdvcP_z0Pg/videos', u'--restrict-filenames',
u'--dateafter', u'20150831', u'--datebefore', u'20151119', u'--verbose']
[debug] Command-line args: []
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2016.02.05.1
[debug] Python version 2.7.10 - Windows-8-6.2.9200
[debug] exe versions: ffmpeg N-71883-geb9fb50, ffprobe N-71883-geb9fb50
[debug] Proxy map: {}
```
| ```
--playlist-start NUMBER Playlist video to start at (default is 1)
```
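So something like the following should skip straight to the failing position (4122 is just the position taken from the report; the format string and output template are the reporter's own):
```
youtube-dl --playlist-start 4122 -f "135+(140/171/bestaudio)/bestvideo[height<720]+bestaudio" -o "/YDK/%(title)s.%(upload_date)s.%(ext)s" https://www.youtube.com/channel/UCeRXV0lrOTQMKEdvcP_z0Pg/videos
```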
A comment about this problem now; I think it's important: every time I retry downloading a file after this error, the segments downloaded in previous attempts come down much faster, and each attempt gets a bit further into the file - but every time the download starts again from the first segment, which is a problem for large files!
I think the downloaded segments are cached somewhere. But the main problem is that such files need to be downloaded automatically; as it is, you often have to restart the download of the same file many times.
The interesting thing is that if you watch a video that fails to download in your browser all the way to the end, then youtube-dl will be able to download it.
I'm facing the same issue. Could you please tell me in which update this problem will be fixed? Thanks
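With the patch above, the retry behaviour becomes configurable via the new `--fragment-retries` option (default 10, or "infinite"); a hypothetical invocation for one of the stubborn videos from this report would be:
```
youtube-dl --fragment-retries infinite https://www.youtube.com/watch?v=j9s_ZDloG2g
```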
| 2016-03-19T15:11:18Z | [] | [] |
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 1616, in process_info
File "youtube_dl\YoutubeDL.pyo", line 1564, in dl
File "youtube_dl\downloader\common.pyo", line 343, in download
File "youtube_dl\downloader\dash.pyo", line 50, in real_download
File "youtube_dl\downloader\dash.pyo", line 29, in append_url_to_file
File "youtube_dl\YoutubeDL.pyo", line 1903, in urlopen
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 469, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 656, in http_error_302
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 469, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 656, in http_error_302
File "urllib2.pyo", line 437, in open
File "urllib2.pyo", line 550, in http_response
File "urllib2.pyo", line 475, in error
File "urllib2.pyo", line 409, in _call_chain
File "urllib2.pyo", line 558, in http_error_default
HTTPError: HTTP Error 404: Not Found]
| 18,997 |
|||
ytdl-org/youtube-dl | ytdl-org__youtube-dl-9324 | 065216d94f59953a228d2683d3bafe4241fd1e29 | diff --git a/youtube_dl/extractor/common.py b/youtube_dl/extractor/common.py
--- a/youtube_dl/extractor/common.py
+++ b/youtube_dl/extractor/common.py
@@ -1061,7 +1061,7 @@ def _parse_f4m_formats(self, manifest, manifest_url, video_id, preference=None,
def _extract_m3u8_formats(self, m3u8_url, video_id, ext=None,
entry_protocol='m3u8', preference=None,
m3u8_id=None, note=None, errnote=None,
- fatal=True):
+ fatal=True, live=False):
formats = [{
'format_id': '-'.join(filter(None, [m3u8_id, 'meta'])),
@@ -1139,7 +1139,11 @@ def _extract_m3u8_formats(self, m3u8_url, video_id, ext=None,
if m3u8_id:
format_id.append(m3u8_id)
last_media_name = last_media.get('NAME') if last_media and last_media.get('TYPE') != 'SUBTITLES' else None
- format_id.append(last_media_name if last_media_name else '%d' % (tbr if tbr else len(formats)))
+ # Bandwidth of live streams may differ over time thus making
+ # format_id unpredictable. So it's better to keep provided
+ # format_id intact.
+ if last_media_name and not live:
+ format_id.append(last_media_name if last_media_name else '%d' % (tbr if tbr else len(formats)))
f = {
'format_id': '-'.join(format_id),
'url': format_url(line.strip()),
diff --git a/youtube_dl/extractor/vlive.py b/youtube_dl/extractor/vlive.py
--- a/youtube_dl/extractor/vlive.py
+++ b/youtube_dl/extractor/vlive.py
@@ -1,8 +1,11 @@
# coding: utf-8
-from __future__ import unicode_literals
+from __future__ import division, unicode_literals
+import re
+import time
from .common import InfoExtractor
from ..utils import (
+ ExtractorError,
dict_get,
float_or_none,
int_or_none,
@@ -31,16 +34,77 @@ def _real_extract(self, url):
webpage = self._download_webpage(
'http://www.vlive.tv/video/%s' % video_id, video_id)
- long_video_id = self._search_regex(
- r'vlive\.tv\.video\.ajax\.request\.handler\.init\(\s*"[0-9]+"\s*,\s*"[^"]*"\s*,\s*"([^"]+)"',
- webpage, 'long video id')
+ # UTC+x - UTC+9 (KST)
+ tz = time.altzone if time.localtime().tm_isdst == 1 else time.timezone
+ tz_offset = -tz // 60 - 9 * 60
+ self._set_cookie('vlive.tv', 'timezoneOffset', '%d' % tz_offset)
- key = self._search_regex(
- r'vlive\.tv\.video\.ajax\.request\.handler\.init\(\s*"[0-9]+"\s*,\s*"[^"]*"\s*,\s*"[^"]+"\s*,\s*"([^"]+)"',
- webpage, 'key')
+ status_params = self._download_json(
+ 'http://www.vlive.tv/video/status?videoSeq=%s' % video_id,
+ video_id, 'Downloading JSON status',
+ headers={'Referer': url})
+ status = status_params.get('status')
+ air_start = status_params.get('onAirStartAt', '')
+ is_live = status_params.get('isLive')
+ video_params = self._search_regex(
+ r'vlive\.tv\.video\.ajax\.request\.handler\.init\((.+)\)',
+ webpage, 'video params')
+ live_params, long_video_id, key = re.split(
+ r'"\s*,\s*"', video_params)[1:4]
+
+ if status == 'LIVE_ON_AIR' or status == 'BIG_EVENT_ON_AIR':
+ live_params = self._parse_json('"%s"' % live_params, video_id)
+ live_params = self._parse_json(live_params, video_id)
+ return self._live(video_id, webpage, live_params)
+ elif status == 'VOD_ON_AIR' or status == 'BIG_EVENT_INTRO':
+ if long_video_id and key:
+ return self._replay(video_id, webpage, long_video_id, key)
+ elif is_live:
+ status = 'LIVE_END'
+ else:
+ status = 'COMING_SOON'
+
+ if status == 'LIVE_END':
+ raise ExtractorError('Uploading for replay. Please wait...',
+ expected=True)
+ elif status == 'COMING_SOON':
+ raise ExtractorError('Coming soon! %s' % air_start, expected=True)
+ elif status == 'CANCELED':
+ raise ExtractorError('We are sorry, '
+ 'but the live broadcast has been canceled.',
+ expected=True)
+ else:
+ raise ExtractorError('Unknown status %s' % status)
+
+ def _get_common_fields(self, webpage):
title = self._og_search_title(webpage)
+ creator = self._html_search_regex(
+ r'<div[^>]+class="info_area"[^>]*>\s*<a\s+[^>]*>([^<]+)',
+ webpage, 'creator', fatal=False)
+ thumbnail = self._og_search_thumbnail(webpage)
+ return {
+ 'title': title,
+ 'creator': creator,
+ 'thumbnail': thumbnail,
+ }
+ def _live(self, video_id, webpage, live_params):
+ formats = []
+ for vid in live_params.get('resolutions', []):
+ formats.extend(self._extract_m3u8_formats(
+ vid['cdnUrl'], video_id, 'mp4',
+ m3u8_id=vid.get('name'),
+ fatal=False, live=True))
+ self._sort_formats(formats)
+
+ return dict(self._get_common_fields(webpage),
+ id=video_id,
+ formats=formats,
+ is_live=True,
+ )
+
+ def _replay(self, video_id, webpage, long_video_id, key):
playinfo = self._download_json(
'http://global.apis.naver.com/rmcnmv/rmcnmv/vod_play_videoInfo.json?%s'
% compat_urllib_parse_urlencode({
@@ -62,11 +126,6 @@ def _real_extract(self, url):
} for vid in playinfo.get('videos', {}).get('list', []) if vid.get('source')]
self._sort_formats(formats)
- thumbnail = self._og_search_thumbnail(webpage)
- creator = self._html_search_regex(
- r'<div[^>]+class="info_area"[^>]*>\s*<a\s+[^>]*>([^<]+)',
- webpage, 'creator', fatal=False)
-
view_count = int_or_none(playinfo.get('meta', {}).get('count'))
subtitles = {}
@@ -77,12 +136,9 @@ def _real_extract(self, url):
'ext': 'vtt',
'url': caption['source']}]
- return {
- 'id': video_id,
- 'title': title,
- 'creator': creator,
- 'thumbnail': thumbnail,
- 'view_count': view_count,
- 'formats': formats,
- 'subtitles': subtitles,
- }
+ return dict(self._get_common_fields(webpage),
+ id=video_id,
+ formats=formats,
+ view_count=view_count,
+ subtitles=subtitles,
+ )
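A side note on the patch above: the timezoneOffset cookie it sets is the viewer's UTC offset relative to KST (UTC+9), expressed in minutes. A minimal worked example of that arithmetic using only the standard time module (the print is just for illustration):

```
import time

# Seconds west of UTC for the local zone (altzone applies while DST is active).
tz = time.altzone if time.localtime().tm_isdst == 1 else time.timezone
# Convert to minutes east of UTC, then subtract KST (UTC+9 = 540 minutes).
tz_offset = -tz // 60 - 9 * 60
# Example: a client on UTC+2 has tz == -7200, so -tz // 60 == 120
# and tz_offset == 120 - 540 == -420, i.e. seven hours behind KST.
print(tz_offset)
```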
| [vlive] Unable to download live videos
### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.04.24_. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with an outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.24**
### Before submitting an _issue_ make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [x] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows:
(Note that this live stream may have ended by the time you check it.)
```
$ youtube-dl -v 'http://www.vlive.tv/video/7779/퇴근-저흰출근방송'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.vlive.tv/video/7779/퇴근-저흰출근방송']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.04.24
[debug] Python version 3.4.3 - Linux-4.5.0-gentoo-x86_64-Intel-R-_Core-TM-_i7-3820_CPU_@_3.60GHz-with-gentoo-2.2
[debug] exe versions: ffmpeg N-79272-gbd1dcaf, ffprobe N-79272-gbd1dcaf, rtmpdump 2.4
[debug] Proxy map: {}
[vlive] 7779: Downloading webpage
ERROR: Unable to extract long video id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 673, in extract_info
ie_result = ie.extract(url)
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/common.py", line 341, in extract
return self._real_extract(url)
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/vlive.py", line 36, in _real_extract
webpage, 'long video id')
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/common.py", line 644, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
youtube_dl.utils.RegexNotFoundError: Unable to extract long video id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
---
### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Live video: uses exactly the same type of URL, just marked as a live video on the site; check current live streams [here](http://www.vlive.tv/home)
---
### Description of your _issue_, suggested solution and other information
(Not sure which category is best suited for this issue, sorry.)
Live videos are served with Flash over HLS; the page source contains a JSON object with the stream parameters:
```
vlive.tv.video.ajax.request.handler.init("7779", "{\"liveStatus\":\"LIVE\",\"resolutions\":[
{\"name\":\"audio\",\"width\":null,\"height\":null,\"cdnUrl\":\"http:\/\/vlive.hls.edgesuite.net\/kr002\/alow.stream\/playlist.m3u8?__gda__=1461682617_f30c9b77b4d56a116b0e436d36b7b1d1\",\"additionalProperties\":{}},
{\"name\":\"250\",\"width\":320,\"height\":184,\"cdnUrl\":\"http:\/\/vlive.hls.edgesuite.net\/kr002\/250.stream\/playlist.m3u8?__gda__=1461682617_5bd3a23663a5cd54dc204da558ff00e5\",\"additionalProperties\":{}},
{\"name\":\"400\",\"width\":640,\"height\":368,\"cdnUrl\":\"http:\/\/vlive.hls.edgesuite.net\/kr002\/400.stream\/playlist.m3u8?__gda__=1461682617_62f2ced6228651d617e7a7b9fff0da6a\",\"additionalProperties\":{}},
{\"name\":\"1200\",\"width\":640,\"height\":368,\"cdnUrl\":\"http:\/\/vlive.hls.edgesuite.net\/kr002\/1200.stream\/playlist.m3u8?__gda__=1461682617_88ce3b5362dc63e50439be087164e98b\",\"additionalProperties\":{}}
],\"additionalProperties\":{}}", "", "", "", "");
```
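For illustration only — a minimal sketch, not the final extractor code, and the function name is hypothetical — the escaped JSON in that handler.init(...) call can be pulled out with a regex and decoded twice (first as a JSON string, then as a JSON object) to obtain the per-resolution HLS playlist URLs:

```
import json
import re


def extract_live_playlists(webpage):
    """Return (name, m3u8 URL) pairs from the escaped JSON shown above."""
    args = re.search(
        r'vlive\.tv\.video\.ajax\.request\.handler\.init\((.+)\)',
        webpage).group(1)
    # The second argument is a JSON document whose quotes are backslash-
    # escaped, so it has to be decoded twice: first as a JSON string,
    # then as a JSON object.
    escaped = re.split(r'"\s*,\s*"', args)[1]
    params = json.loads(json.loads('"%s"' % escaped))
    return [(res.get('name'), res.get('cdnUrl'))
            for res in params.get('resolutions', [])]
```

Each cdnUrl is an ordinary HLS master playlist, so once extracted the URLs can be handed to the usual m3u8 handling.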
| 2016-04-26T15:02:37Z | [] | [] |
Traceback (most recent call last):
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 673, in extract_info
ie_result = ie.extract(url)
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/common.py", line 341, in extract
return self._real_extract(url)
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/vlive.py", line 36, in _real_extract
webpage, 'long video id')
File "/home/kagami/.local/bin/youtube-dl/youtube_dl/extractor/common.py", line 644, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
youtube_dl.utils.RegexNotFoundError: Unable to extract long video id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| 19,001 |