Mirror of https://github.com/l1ving/youtube-dl (synced 2025-02-03 20:16:06 +08:00)

Merge branch 'master' into BlenderCloud-issue-13282

Commit 63e9427973

.github/ISSUE_TEMPLATE.md (vendored): 16 lines changed
@@ -1,16 +1,16 @@
 ## Please follow the guide below

 - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
-- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
-- Use *Preview* tab to see how your issue will actually look like
+- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
+- Use the *Preview* tab to see what your issue will actually look like

 ---

-### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.07.09*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
-- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2017.07.09**
+### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.08.18*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
+- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2017.08.18**

 ### Before submitting an *issue* make sure you have:
-- [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
+- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
 - [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones

 ### What is the purpose of your *issue*?
@@ -28,14 +28,14 @@

 ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:

-Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
+Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):

 ```
 $ youtube-dl -v <your command line>
 [debug] System config: []
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2017.07.09
+[debug] youtube-dl version 2017.08.18
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}
.github/ISSUE_TEMPLATE_tmpl.md (vendored): 12 lines changed
@@ -1,16 +1,16 @@
 ## Please follow the guide below

 - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
-- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
-- Use *Preview* tab to see how your issue will actually look like
+- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
+- Use the *Preview* tab to see what your issue will actually look like

 ---

-### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *%(version)s*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
+### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *%(version)s*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
 - [ ] I've **verified** and **I assure** that I'm running youtube-dl **%(version)s**

 ### Before submitting an *issue* make sure you have:
-- [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
+- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
 - [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones

 ### What is the purpose of your *issue*?
@@ -28,9 +28,9 @@

 ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:

-Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
+Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):

 ```
 $ youtube-dl -v <your command line>
 [debug] System config: []
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
AUTHORS: 1 line changed
@@ -223,3 +223,4 @@ Jan Kundrát
 Giuseppe Fabiano
 Örn Guðjónsson
 Parmjit Virk
+Genki Sky
ChangeLog: 156 lines changed
@@ -1,7 +1,163 @@
+version <unreleased>
+
+Core
+* [utils] Fix unescapeHTML for misformed string like "&a&quot;" (#13935)
+
+Extractors
++ [liveleak] Support multi-video pages (#6542)
++ [liveleak] Support another liveleak embedding pattern (#13336)
+* [cda] Fix extraction (#13935)
+
+
+version 2017.08.18
+
+Core
+* [YoutubeDL] Sanitize byte string format URLs (#13951)
++ [extractor/common] Add support for float durations in _parse_mpd_formats
+  (#13919)
+
+Extractors
+* [arte] Detect unavailable videos (#13945)
+* [generic] Convert redirect URLs to unicode strings (#13951)
+* [udemy] Fix paid course detection (#13943)
+* [pluralsight] Use RPC API for course extraction (#13937)
++ [clippit] Add support for clippituser.tv
++ [qqmusic] Support new URL schemes (#13805)
+* [periscope] Renew HLS extraction (#13917)
+* [mixcloud] Extract decrypt key
+
+
+version 2017.08.13
+
+Core
+* [YoutubeDL] Make sure format id is not empty
+* [extractor/common] Make _family_friendly_search optional
+* [extractor/common] Respect source's type attribute for HTML5 media (#13892)
+
+Extractors
+* [pornhub:playlistbase] Skip videos from drop-down menu (#12819, #13902)
++ [fourtube] Add support for pornerbros.com (#6022)
++ [fourtube] Add support for porntube.com (#7859, #13901)
++ [fourtube] Add support for fux.com
+* [limelight] Improve embeds detection (#13895)
++ [reddit] Add support for v.redd.it and reddit.com (#13847)
+* [aparat] Extract all formats (#13887)
+* [mixcloud] Fix play info decryption (#13885)
++ [generic] Add support for vzaar embeds (#13876)
+
+
+version 2017.08.09
+
+Core
+* [utils] Skip missing params in cli_bool_option (#13865)
+
+Extractors
+* [xxxymovies] Fix title extraction (#13868)
++ [nick] Add support for nick.com.pl (#13860)
+* [mixcloud] Fix play info decryption (#13867)
+* [20min] Fix embeds extraction (#13852)
+* [dplayit] Fix extraction (#13851)
++ [niconico] Support videos with multiple formats (#13522)
++ [niconico] Support HTML5-only videos (#13806)
+
+
+version 2017.08.06
+
+Core
+* Use relative paths for DASH fragments (#12990)
+
+Extractors
+* [pluralsight] Fix format selection
+- [mpora] Remove extractor (#13826)
++ [voot] Add support for voot.com (#10255, #11644, #11814, #12350, #13218)
+* [vlive:channel] Limit number of videos per page to 100 (#13830)
+* [podomatic] Extend URL regular expression (#13827)
+* [cinchcast] Extend URL regular expression
+* [yandexdisk] Relax URL regular expression (#13824)
+* [vidme] Extract DASH and HLS formats
+- [teamfour] Remove extractor (#13782)
+* [pornhd] Fix extraction (#13783)
+* [udemy] Fix subtitles extraction (#13812)
+* [mlb] Extend URL regular expression (#13740, #13773)
++ [pbs] Add support for new URL schema (#13801)
+* [nrktv] Update API host (#13796)
+
+
+version 2017.07.30.1
+
+Core
+* [downloader/hls] Use redirect URL as manifest base (#13755)
+* [options] Correctly hide login info from debug outputs (#13696)
+
+Extractors
++ [watchbox] Add support for watchbox.de (#13739)
+- [clipfish] Remove extractor
++ [youjizz] Fix extraction (#13744)
++ [generic] Add support for another ooyala embed pattern (#13727)
++ [ard] Add support for lives (#13771)
+* [soundcloud] Update client id
++ [soundcloud:trackstation] Add support for track stations (#13733)
+* [svtplay] Use geo verification proxy for API request
+* [svtplay] Update API URL (#13767)
++ [yandexdisk] Add support for yadi.sk (#13755)
++ [megaphone] Add support for megaphone.fm
+* [amcnetworks] Make rating optional (#12453)
+* [cloudy] Fix extraction (#13737)
++ [nickru] Add support for nickelodeon.ru
+* [mtv] Improve thumbnail extraction
+* [nick] Automate geo-restriction bypass (#13711)
+* [niconico] Improve error reporting (#13696)
+
+
+version 2017.07.23
+
+Core
+* [YoutubeDL] Improve default format specification (#13704)
+* [YoutubeDL] Do not override id, extractor and extractor_key for
+  url_transparent entities
+* [extractor/common] Fix playlist_from_matches
+
+Extractors
+* [itv] Fix production id extraction (#13671, #13703)
+* [vidio] Make duration non fatal and fix typo
+* [mtv] Skip missing video parts (#13690)
+* [sportbox:embed] Fix extraction
++ [npo] Add support for npo3.nl URLs (#13695)
+* [dramafever] Remove video id from title (#13699)
++ [egghead:lesson] Add support for lessons (#6635)
+* [funnyordie] Extract more metadata (#13677)
+* [youku:show] Fix playlist extraction (#13248)
++ [dispeak] Recognize sevt subdomain (#13276)
+* [adn] Improve error reporting (#13663)
+* [crunchyroll] Relax series and season regex (#13659)
++ [spiegel:article] Add support for nexx iframe embeds (#13029)
++ [nexx:embed] Add support for iframe embeds
+* [nexx] Improve JS embed extraction
++ [pearvideo] Add support for pearvideo.com (#13031)
+
+
+version 2017.07.15
+
+Core
+* [YoutubeDL] Don't expand environment variables in meta fields (#13637)
+
+Extractors
+* [spiegeltv] Delegate extraction to nexx extractor (#13159)
++ [nexx] Add support for nexx.cloud (#10807, #13465)
+* [generic] Fix rutube embeds extraction (#13641)
+* [karrierevideos] Fix title extraction (#13641)
+* [youtube] Don't capture YouTube Red ad for creator meta field (#13621)
+* [slideshare] Fix extraction (#13617)
++ [5tv] Add another video URL pattern (#13354, #13606)
+* [drtv] Make HLS and HDS extraction non fatal
+* [ted] Fix subtitles extraction (#13628, #13629)
+* [vine] Make sure the title won't be empty
++ [twitter] Support HLS streams in vmap URLs
++ [periscope] Support pscp.tv URLs in embedded frames
+* [twitter] Extract mp4 urls via mobile API (#12726)
+* [niconico] Fix authentication error handling (#12486)
+* [giantbomb] Extract m3u8 formats (#13626)
++ [vlive:playlist] Add support for playlists (#13613)
+
+
 version 2017.07.09
Makefile: 11 lines changed
@@ -46,8 +46,15 @@ tar: youtube-dl.tar.gz
 pypi-files: youtube-dl.bash-completion README.txt youtube-dl.1 youtube-dl.fish

 youtube-dl: youtube_dl/*.py youtube_dl/*/*.py
-	zip --quiet youtube-dl youtube_dl/*.py youtube_dl/*/*.py
-	zip --quiet --junk-paths youtube-dl youtube_dl/__main__.py
+	mkdir -p zip
+	for d in youtube_dl youtube_dl/downloader youtube_dl/extractor youtube_dl/postprocessor ; do \
+	  mkdir -p zip/$$d ;\
+	  cp -a $$d/*.py zip/$$d/ ;\
+	done
+	touch -t 200001010101 zip/youtube_dl/*.py zip/youtube_dl/*/*.py
+	mv zip/youtube_dl/__main__.py zip/
+	cd zip ; zip --quiet ../youtube-dl youtube_dl/*.py youtube_dl/*/*.py __main__.py
+	rm -rf zip
 	echo '#!$(PYTHON)' > youtube-dl
 	cat youtube-dl.zip >> youtube-dl
 	rm youtube-dl.zip
README.md: 54 lines changed
@@ -25,7 +25,7 @@ If you do not have curl, you can alternatively use a recent wget:
     sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
     sudo chmod a+rx /usr/local/bin/youtube-dl

-Windows users can [download an .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in any location on their [PATH](http://en.wikipedia.org/wiki/PATH_%28variable%29) except for `%SYSTEMROOT%\System32` (e.g. **do not** put in `C:\Windows\System32`).
+Windows users can [download an .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in any location on their [PATH](https://en.wikipedia.org/wiki/PATH_%28variable%29) except for `%SYSTEMROOT%\System32` (e.g. **do not** put in `C:\Windows\System32`).

 You can also use pip:

@@ -33,7 +33,7 @@ You can also use pip:

 This command will update youtube-dl if you have already installed it. See the [pypi page](https://pypi.python.org/pypi/youtube_dl) for more information.

-OS X users can install youtube-dl with [Homebrew](http://brew.sh/):
+OS X users can install youtube-dl with [Homebrew](https://brew.sh/):

     brew install youtube-dl

@@ -458,7 +458,7 @@ You can also use `--config-location` if you want to use custom configuration fil

 ### Authentication with `.netrc` file

-You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every youtube-dl execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](http://stackoverflow.com/tags/.netrc/info) on a per extractor basis. For that you will need to create a `.netrc` file in your `$HOME` and restrict permissions to read/write by only you:
+You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every youtube-dl execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](https://stackoverflow.com/tags/.netrc/info) on a per extractor basis. For that you will need to create a `.netrc` file in your `$HOME` and restrict permissions to read/write by only you:
 ```
 touch $HOME/.netrc
 chmod a-rwx,u+rw $HOME/.netrc
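For reference, each credentials line in that file uses the standard netrc layout, one `machine` entry per extractor, with the machine name being the extractor's name in lowercase; a hypothetical example:

    machine youtube login myaccount@gmail.com password my_youtube_password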
@@ -485,7 +485,7 @@ The `-o` option allows users to indicate a template for the output file names.

 **tl;dr:** [navigate me to examples](#output-template-examples).

-The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [python string formatting operations](https://docs.python.org/2/library/stdtypes.html#string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a formatting operations. Allowed names along with sequence type are:
+The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "https://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [python string formatting operations](https://docs.python.org/2/library/stdtypes.html#string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations. Allowed names along with sequence type are:

 - `id` (string): Video identifier
 - `title` (string): Video title
@@ -584,7 +584,7 @@ If you are using an output template inside a Windows batch file then you must es

 #### Output template examples

-Note on Windows you may need to use double quotes instead of single.
+Note that on Windows you may need to use double quotes instead of single.

 ```bash
 $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc
@@ -603,7 +603,7 @@ $ youtube-dl -o '%(uploader)s/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)
 $ youtube-dl -u user -p password -o '~/MyVideos/%(playlist)s/%(chapter_number)s - %(chapter)s/%(title)s.%(ext)s' https://www.udemy.com/java-tutorial/

 # Download entire series season keeping each series and each season in separate directory under C:/MyVideos
-$ youtube-dl -o "C:/MyVideos/%(series)s/%(season_number)s - %(season)s/%(episode_number)s - %(episode)s.%(ext)s" http://videomore.ru/kino_v_detalayah/5_sezon/367617
+$ youtube-dl -o "C:/MyVideos/%(series)s/%(season_number)s - %(season)s/%(episode_number)s - %(episode)s.%(ext)s" https://videomore.ru/kino_v_detalayah/5_sezon/367617

 # Stream the video being downloaded to stdout
 $ youtube-dl -o - BaW_jenozKc
@@ -671,7 +671,7 @@ If you want to preserve the old format selection behavior (prior to youtube-dl 2

 #### Format selection examples

-Note on Windows you may need to use double quotes instead of single.
+Note that on Windows you may need to use double quotes instead of single.

 ```bash
 # Download best mp4 format available or any other best if no mp4 available
@@ -716,17 +716,17 @@ $ youtube-dl --dateafter 20000101 --datebefore 20091231

 ### How do I update youtube-dl?

-If you've followed [our manual installation instructions](http://rg3.github.io/youtube-dl/download.html), you can simply run `youtube-dl -U` (or, on Linux, `sudo youtube-dl -U`).
+If you've followed [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html), you can simply run `youtube-dl -U` (or, on Linux, `sudo youtube-dl -U`).

 If you have used pip, a simple `sudo pip install -U youtube-dl` is sufficient to update.

-If you have installed youtube-dl using a package manager like *apt-get* or *yum*, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily. Simply go to http://yt-dl.org/ to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.
+If you have installed youtube-dl using a package manager like *apt-get* or *yum*, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily. Simply go to https://yt-dl.org to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.

 As a last resort, you can also uninstall the version installed by your package manager and follow our manual installation instructions. For that, remove the distribution's package, with a line like

     sudo apt-get remove -y youtube-dl

-Afterwards, simply follow [our manual installation instructions](http://rg3.github.io/youtube-dl/download.html):
+Afterwards, simply follow [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html):

 ```
 sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
@@ -766,11 +766,11 @@ Apparently YouTube requires you to pass a CAPTCHA test if you download too much.

 youtube-dl works fine on its own on most sites. However, if you want to convert video/audio, you'll need [avconv](https://libav.org/) or [ffmpeg](https://www.ffmpeg.org/). On some sites - most notably YouTube - videos can be retrieved in a higher quality format without sound. youtube-dl will detect whether avconv/ffmpeg is present and automatically pick the best option.

-Videos or video formats streamed via RTMP protocol can only be downloaded when [rtmpdump](https://rtmpdump.mplayerhq.hu/) is installed. Downloading MMS and RTSP videos requires either [mplayer](http://mplayerhq.hu/) or [mpv](https://mpv.io/) to be installed.
+Videos or video formats streamed via RTMP protocol can only be downloaded when [rtmpdump](https://rtmpdump.mplayerhq.hu/) is installed. Downloading MMS and RTSP videos requires either [mplayer](https://mplayerhq.hu/) or [mpv](https://mpv.io/) to be installed.

 ### I have downloaded a video but how can I play it?

-Once the video is fully downloaded, use any video player, such as [mpv](https://mpv.io/), [vlc](http://www.videolan.org/) or [mplayer](http://www.mplayerhq.hu/).
+Once the video is fully downloaded, use any video player, such as [mpv](https://mpv.io/), [vlc](https://www.videolan.org/) or [mplayer](https://www.mplayerhq.hu/).

 ### I extracted a video URL with `-g`, but it does not play on another machine / in my web browser.

@@ -845,10 +845,10 @@ Use the `-o` to specify an [output template](#output-template), for example `-o

 ### How do I download a video starting with a `-`?

-Either prepend `http://www.youtube.com/watch?v=` or separate the ID from the options with `--`:
+Either prepend `https://www.youtube.com/watch?v=` or separate the ID from the options with `--`:

     youtube-dl -- -wNyEUrxzFU
-    youtube-dl "http://www.youtube.com/watch?v=-wNyEUrxzFU"
+    youtube-dl "https://www.youtube.com/watch?v=-wNyEUrxzFU"

 ### How do I pass cookies to youtube-dl?

@@ -862,9 +862,9 @@ Passing cookies to youtube-dl is a good way to workaround login when a particula

 ### How do I stream directly to media player?

-You will first need to tell youtube-dl to stream media to stdout with `-o -`, and also tell your media player to read from stdin (it must be capable of this for streaming) and then pipe former to latter. For example, streaming to [vlc](http://www.videolan.org/) can be achieved with:
+You will first need to tell youtube-dl to stream media to stdout with `-o -`, and also tell your media player to read from stdin (it must be capable of this for streaming) and then pipe former to latter. For example, streaming to [vlc](https://www.videolan.org/) can be achieved with:

-    youtube-dl -o - "http://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -
+    youtube-dl -o - "https://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -

 ### How do I download only new videos from a playlist?

@@ -884,7 +884,7 @@ When youtube-dl detects an HLS video, it can download it either with the built-i

 When youtube-dl knows that one particular downloader works better for a given website, that downloader will be picked. Otherwise, youtube-dl will pick the best downloader for general compatibility, which at the moment happens to be ffmpeg. This choice may change in future versions of youtube-dl, with improvements of the built-in downloader and/or ffmpeg.

-In particular, the generic extractor (used when your website is not in the [list of supported sites by youtube-dl](http://rg3.github.io/youtube-dl/supportedsites.html) cannot mandate one specific downloader.
+In particular, the generic extractor (used when your website is not in the [list of supported sites by youtube-dl](https://rg3.github.io/youtube-dl/supportedsites.html)) cannot mandate one specific downloader.

 If you put either `--hls-prefer-native` or `--hls-prefer-ffmpeg` into your configuration, a different subset of videos will fail to download correctly. Instead, it is much better to [file an issue](https://yt-dl.org/bug) or a pull request which details why the native or the ffmpeg HLS downloader is a better choice for your use case.

@@ -910,7 +910,7 @@ Feel free to bump the issue from time to time by writing a small comment ("Issue

 ### How can I detect whether a given URL is supported by youtube-dl?

-For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from http://example.com/video/1234567 to http://example.com/v/1234567 ) and youtube-dl reports an URL of a service in that list as unsupported. In that case, simply report a bug.
+For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from https://example.com/video/1234567 to https://example.com/v/1234567 ) and youtube-dl reports an URL of a service in that list as unsupported. In that case, simply report a bug.

 It is *not* possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches **all** URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that it's hosting itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.

@@ -924,7 +924,7 @@ youtube-dl is an open-source project manned by too few volunteers, so we'd rathe

 # DEVELOPER INSTRUCTIONS

-Most users do not need to build youtube-dl and can [download the builds](http://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
+Most users do not need to build youtube-dl and can [download the builds](https://rg3.github.io/youtube-dl/download.html) or get them from their distribution.

 To run youtube-dl as a developer, you don't need to build anything either. Simply execute

@@ -972,7 +972,7 @@ After you have ensured this site is distributing its content legally, you can fo
     class YourExtractorIE(InfoExtractor):
         _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
         _TEST = {
-            'url': 'http://yourextractor.com/watch/42',
+            'url': 'https://yourextractor.com/watch/42',
             'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
             'info_dict': {
                 'id': '42',
@@ -1005,8 +1005,8 @@ After you have ensured this site is distributing its content legally, you can fo
 5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc.
 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L74-L252). Add tests and code for as many as you want.
-8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://pypi.python.org/pypi/flake8). Also make sure your code works under all [Python](http://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
-9. When the tests pass, [add](http://git-scm.com/docs/git-add) the new files and [commit](http://git-scm.com/docs/git-commit) them and [push](http://git-scm.com/docs/git-push) the result, like this:
+8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://pypi.python.org/pypi/flake8). Also make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
+9. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:

     $ git add youtube_dl/extractor/extractors.py
     $ git add youtube_dl/extractor/yourextractor.py
@@ -1162,7 +1162,7 @@ import youtube_dl

 ydl_opts = {}
 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
-    ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
+    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
 ```

 Most likely, you'll want to use various options. For a list of options available, have a look at [`youtube_dl/YoutubeDL.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L129-L279). For a start, if you want to intercept youtube-dl's output, set a `logger` object.
@@ -1201,19 +1201,19 @@ ydl_opts = {
     'progress_hooks': [my_hook],
 }
 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
-    ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
+    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
 ```

 # BUGS

-Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel [#youtube-dl](irc://chat.freenode.net/#youtube-dl) on freenode ([webchat](http://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).
+Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel [#youtube-dl](irc://chat.freenode.net/#youtube-dl) on freenode ([webchat](https://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).

 **Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
 ```
 $ youtube-dl -v <your command line>
 [debug] System config: []
 [debug] User config: []
-[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
+[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
 [debug] youtube-dl version 2015.12.06
 [debug] Git HEAD: 135392e
@@ -1244,7 +1244,7 @@ For bug reports, this means that your report should contain the *complete* outpu

 If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).

-**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `http://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `http://www.youtube.com/`) is *not* an example URL.
+**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.

 ### Are you using the latest version?
docs/supportedsites.md
@@ -42,7 +42,7 @@
 - **Allocine**
 - **AlphaPorno**
 - **AMCNetworks**
-- **anderetijden**: npo.nl and ntr.nl
+- **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
 - **AnimeOnDemand**
 - **anitube.se**
 - **Anvato**
@@ -155,8 +155,8 @@
 - **chirbit:profile**
 - **Cinchcast**
 - **CJSW**
-- **Clipfish**
 - **cliphunter**
+- **Clippit**
 - **ClipRs**
 - **Clipsyndicate**
 - **CloserToTruth**
@@ -238,6 +238,7 @@
 - **EbaumsWorld**
 - **EchoMsk**
 - **egghead:course**: egghead.io course
+- **egghead:lesson**: egghead.io lesson
 - **eHow**
 - **Einthusan**
 - **eitb.tv**
@@ -294,6 +295,7 @@
 - **Funimation**
 - **FunnyOrDie**
 - **Fusion**
+- **Fux**
 - **FXNetworks**
 - **GameInformer**
 - **GameOne**
@@ -439,6 +441,7 @@
 - **Medialaan**
 - **Mediaset**
 - **Medici**
+- **megaphone.fm**: megaphone.fm embedded players
 - **Meipai**: 美拍
 - **MelonVOD**
 - **META**
@@ -471,7 +474,6 @@
 - **MovieFap**
 - **Moviezine**
 - **MovingImage**
-- **MPORA**
 - **MSN**
 - **mtg**: MTG services
 - **mtv**
@@ -521,6 +523,8 @@
 - **NextMedia**: 蘋果日報
 - **NextMediaActionNews**: 蘋果日報 - 動新聞
 - **NextTV**: 壹電視
+- **Nexx**
+- **NexxEmbed**
 - **nfb**: National Film Board of Canada
 - **nfl.com**
 - **NhkVod**
@@ -530,6 +534,7 @@
 - **nhl.com:videocenter:category**: NHL videocenter category
 - **nick.com**
 - **nick.de**
+- **nickelodeonru**
 - **nicknight**
 - **niconico**: ニコニコ動画
 - **NiconicoPlaylist**
@@ -551,7 +556,7 @@
 - **NowTVList**
 - **nowvideo**: NowVideo
 - **Noz**
-- **npo**: npo.nl and ntr.nl
+- **npo**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
 - **npo.nl:live**
 - **npo.nl:radio**
 - **npo.nl:radio:fragment**
@@ -595,6 +600,7 @@
 - **Patreon**
 - **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC)
 - **pcmag**
+- **PearVideo**
 - **People**
 - **periscope**: Periscope
 - **periscope:user**: Periscope user videos
@@ -617,6 +623,7 @@
 - **PolskieRadio**
 - **PolskieRadioCategory**
 - **PornCom**
+- **PornerBros**
 - **PornFlip**
 - **PornHd**
 - **PornHub**: PornHub and Thumbzilla
@@ -625,6 +632,7 @@
 - **Pornotube**
 - **PornoVoisines**
 - **PornoXO**
+- **PornTube**
 - **PressTV**
 - **PrimeShareTV**
 - **PromptFile**
@@ -650,6 +658,8 @@
 - **RBMARadio**
 - **RDS**: RDS.ca
 - **RedBullTV**
+- **Reddit**
+- **RedditR**
 - **RedTube**
 - **RegioTV**
 - **RENTV**
@@ -730,6 +740,7 @@
 - **soundcloud:playlist**
 - **soundcloud:search**: Soundcloud search
 - **soundcloud:set**
+- **soundcloud:trackstation**
 - **soundcloud:user**
 - **soundgasm**
 - **soundgasm:profile**
@@ -771,13 +782,12 @@
 - **tagesschau:player**
 - **Tass**
 - **TastyTrade**
-- **TBS**
+- **TBS** (Currently broken)
 - **TDSLifeway**
 - **teachertube**: teachertube.com videos
 - **teachertube:user:collection**: teachertube.com user and collection videos
 - **TeachingChannel**
 - **Teamcoco**
-- **TeamFourStar**
 - **TechTalks**
 - **techtv.mit.edu**
 - **ted**
@@ -942,13 +952,15 @@
 - **vk:wallpost**
 - **vlive**
 - **vlive:channel**
+- **vlive:playlist**
 - **Vodlocker**
 - **VODPl**
 - **VODPlatform**
 - **VoiceRepublic**
+- **Voot**
 - **VoxMedia**
 - **Vporn**
-- **vpro**: npo.nl and ntr.nl
+- **vpro**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
 - **Vrak**
 - **VRT**: deredactie.be, sporza.be, cobra.be and cobra.canvas.be
 - **vrv**
@@ -963,6 +975,7 @@
 - **washingtonpost**
 - **washingtonpost:article**
 - **wat.tv**
+- **WatchBox**
 - **WatchIndianPorn**: Watch Indian Porn
 - **WDR**
 - **wdr:mobile**
@@ -974,7 +987,7 @@
 - **wholecloud**: WholeCloud
 - **Wimp**
 - **Wistia**
-- **wnl**: npo.nl and ntr.nl
+- **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
 - **WorldStarHipHop**
 - **wrzuta.pl**
 - **wrzuta.pl:playlist**
@@ -998,6 +1011,7 @@
 - **XVideos**
 - **XXXYMovies**
 - **Yahoo**: Yahoo screen and movies
+- **YandexDisk**
 - **yandexmusic:album**: Яндекс.Музыка - Альбом
 - **yandexmusic:playlist**: Яндекс.Музыка - Плейлист
 - **yandexmusic:track**: Яндекс.Музыка - Трек
test/test_InfoExtractor.py
@@ -10,6 +10,7 @@ import unittest
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

 from test.helper import FakeYDL, expect_dict, expect_value
+from youtube_dl.compat import compat_etree_fromstring
 from youtube_dl.extractor.common import InfoExtractor
 from youtube_dl.extractor import YoutubeIE, get_info_extractor
 from youtube_dl.utils import encode_data_uri, strip_jsonp, ExtractorError, RegexNotFoundError
@@ -488,6 +489,91 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
         self.ie._sort_formats(formats)
         expect_value(self, formats, expected_formats, None)

+    def test_parse_mpd_formats(self):
+        _TEST_CASES = [
+            (
+                # https://github.com/rg3/youtube-dl/issues/13919
+                'float_duration',
+                'http://unknown/manifest.mpd',
+                [{
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '318597',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.42001f',
+                    'tbr': 318.597,
+                    'width': 340,
+                    'height': 192,
+                }, {
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '638590',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.42001f',
+                    'tbr': 638.59,
+                    'width': 512,
+                    'height': 288,
+                }, {
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '1022565',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.4d001f',
+                    'tbr': 1022.565,
+                    'width': 688,
+                    'height': 384,
+                }, {
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '2046506',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.4d001f',
+                    'tbr': 2046.506,
+                    'width': 1024,
+                    'height': 576,
+                }, {
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '3998017',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.640029',
+                    'tbr': 3998.017,
+                    'width': 1280,
+                    'height': 720,
+                }, {
+                    'manifest_url': 'http://unknown/manifest.mpd',
+                    'ext': 'mp4',
+                    'format_id': '5997485',
+                    'format_note': 'DASH video',
+                    'protocol': 'http_dash_segments',
+                    'acodec': 'none',
+                    'vcodec': 'avc1.640032',
+                    'tbr': 5997.485,
+                    'width': 1920,
+                    'height': 1080,
+                }]
+            ),
+        ]
+
+        for mpd_file, mpd_url, expected_formats in _TEST_CASES:
+            with io.open('./test/testdata/mpd/%s.mpd' % mpd_file,
+                         mode='r', encoding='utf-8') as f:
+                formats = self.ie._parse_mpd_formats(
+                    compat_etree_fromstring(f.read().encode('utf-8')),
+                    mpd_url=mpd_url)
+                self.ie._sort_formats(formats)
+                expect_value(self, formats, expected_formats, None)
+

 if __name__ == '__main__':
     unittest.main()
test/test_YoutubeDL.py
@@ -41,6 +41,7 @@ def _make_result(formats, **kwargs):
         'id': 'testid',
         'title': 'testttitle',
         'extractor': 'testex',
+        'extractor_key': 'TestEx',
     }
     res.update(**kwargs)
     return res
@@ -370,6 +371,19 @@ class TestFormatSelection(unittest.TestCase):
         ydl = YDL({'format': 'best[height>360]'})
         self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

+    def test_format_selection_issue_10083(self):
+        # See https://github.com/rg3/youtube-dl/issues/10083
+        formats = [
+            {'format_id': 'regular', 'height': 360, 'url': TEST_URL},
+            {'format_id': 'video', 'height': 720, 'acodec': 'none', 'url': TEST_URL},
+            {'format_id': 'audio', 'vcodec': 'none', 'url': TEST_URL},
+        ]
+        info_dict = _make_result(formats)
+
+        ydl = YDL({'format': 'best[height>360]/bestvideo[height>360]+bestaudio'})
+        ydl.process_ie_result(info_dict.copy())
+        self.assertEqual(ydl.downloaded_info_dicts[0]['format_id'], 'video+audio')
+
     def test_invalid_format_specs(self):
         def assert_syntax_error(format_spec):
             ydl = YDL({'format': format_spec})
@@ -448,6 +462,17 @@ class TestFormatSelection(unittest.TestCase):
                 pass
         self.assertEqual(ydl.downloaded_info_dicts, [])

+    def test_default_format_spec(self):
+        ydl = YDL({'simulate': True})
+        self.assertEqual(ydl._default_format_spec({}), 'bestvideo+bestaudio/best')
+
+        ydl = YDL({'outtmpl': '-'})
+        self.assertEqual(ydl._default_format_spec({}), 'best')
+
+        ydl = YDL({})
+        self.assertEqual(ydl._default_format_spec({}, download=False), 'bestvideo+bestaudio/best')
+        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'best')
+

 class TestYoutubeDL(unittest.TestCase):
     def test_subtitles(self):
@@ -527,6 +552,8 @@ class TestYoutubeDL(unittest.TestCase):
             'ext': 'mp4',
             'width': None,
             'height': 1080,
+            'title1': '$PATH',
+            'title2': '%PATH%',
         }

         def fname(templ):
@@ -545,10 +572,14 @@ class TestYoutubeDL(unittest.TestCase):
         self.assertEqual(fname('%(height)0 6d.%(ext)s'), ' 01080.mp4')
         self.assertEqual(fname('%(height)0 6d.%(ext)s'), ' 01080.mp4')
         self.assertEqual(fname('%(height) 0 6d.%(ext)s'), ' 01080.mp4')
+        self.assertEqual(fname('%%'), '%')
+        self.assertEqual(fname('%%%%'), '%%')
         self.assertEqual(fname('%%(height)06d.%(ext)s'), '%(height)06d.mp4')
         self.assertEqual(fname('%(width)06d.%(ext)s'), 'NA.mp4')
         self.assertEqual(fname('%(width)06d.%%(ext)s'), 'NA.%(ext)s')
         self.assertEqual(fname('%%(width)06d.%(ext)s'), '%(width)06d.mp4')
+        self.assertEqual(fname('Hello %(title1)s'), 'Hello $PATH')
+        self.assertEqual(fname('Hello %(title2)s'), 'Hello %PATH%')

     def test_format_note(self):
         ydl = YoutubeDL()
@@ -755,7 +786,8 @@ class TestYoutubeDL(unittest.TestCase):
                     '_type': 'url_transparent',
                     'url': 'foo2:',
                     'ie_key': 'Foo2',
-                    'title': 'foo1 title'
+                    'title': 'foo1 title',
+                    'id': 'foo1_id',
                 }

         class Foo2IE(InfoExtractor):
@@ -781,6 +813,9 @@ class TestYoutubeDL(unittest.TestCase):
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['url'], TEST_URL)
         self.assertEqual(downloaded['title'], 'foo1 title')
+        self.assertEqual(downloaded['id'], 'testid')
+        self.assertEqual(downloaded['extractor'], 'testex')
+        self.assertEqual(downloaded['extractor_key'], 'TestEx')


 if __name__ == '__main__':
test/test_options.py (new file): 26 lines
@@ -0,0 +1,26 @@
+# coding: utf-8
+
+from __future__ import unicode_literals
+
+# Allow direct execution
+import os
+import sys
+import unittest
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from youtube_dl.options import _hide_login_info
+
+
+class TestOptions(unittest.TestCase):
+    def test_hide_login_info(self):
+        self.assertEqual(_hide_login_info(['-u', 'foo', '-p', 'bar']),
+                         ['-u', 'PRIVATE', '-p', 'PRIVATE'])
+        self.assertEqual(_hide_login_info(['-u']), ['-u'])
+        self.assertEqual(_hide_login_info(['-u', 'foo', '-u', 'bar']),
+                         ['-u', 'PRIVATE', '-u', 'PRIVATE'])
+        self.assertEqual(_hide_login_info(['--username=foo']),
+                         ['--username=PRIVATE'])
+
+
+if __name__ == '__main__':
+    unittest.main()
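These tests pin down the masking contract: a value following a `-u`/`-p` style option is replaced with `PRIVATE`, a trailing option with no value is left alone, and `--option=value` forms are masked in place. A minimal sketch of a function satisfying them (an illustration, not the actual `youtube_dl/options.py` implementation):

```python
import re

PRIVATE_OPTS = ['-p', '-u', '--password', '--username']
eqre = re.compile(
    '^(?P<key>' + '|'.join(re.escape(o) for o in PRIVATE_OPTS) + ')=.+$')

def hide_login_info(opts):
    def hide(opt, prev):
        if prev in PRIVATE_OPTS:
            return 'PRIVATE'  # value token of a two-token option
        mobj = eqre.match(opt)
        if mobj:
            return mobj.group('key') + '=PRIVATE'  # --username=foo form
        return opt
    return [hide(opt, opts[i - 1] if i else None) for i, opt in enumerate(opts)]

assert hide_login_info(['-u', 'foo', '-p', 'bar']) == ['-u', 'PRIVATE', '-p', 'PRIVATE']
assert hide_login_info(['-u']) == ['-u']
assert hide_login_info(['--username=foo']) == ['--username=PRIVATE']
```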
test/test_utils.py
@@ -279,6 +279,7 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(unescapeHTML('&#47;'), '/')
         self.assertEqual(unescapeHTML('&eacute;'), 'é')
         self.assertEqual(unescapeHTML('&#2013266066;'), '&#2013266066;')
+        self.assertEqual(unescapeHTML('&a&quot;'), '&a"')
         # HTML5 entities
         self.assertEqual(unescapeHTML('&period;&apos;'), '.\'')

@@ -1182,6 +1183,10 @@ part 3</font></u>
             cli_bool_option(
                 {'nocheckcertificate': False}, '--check-certificate', 'nocheckcertificate', 'false', 'true', '='),
             ['--check-certificate=true'])
+        self.assertEqual(
+            cli_bool_option(
+                {}, '--check-certificate', 'nocheckcertificate', 'false', 'true', '='),
+            [])

     def test_ohdave_rsa_encrypt(self):
         N = 0xab86b6371b5318aaa1d3c9e612a9f1264f372323c8c0f19875b5fc3b3fd3afcc1e5bec527aa94bfa85bffc157e4245aebda05389a5357b75115ac94f074aefcd
test/testdata/mpd/float_duration.mpd (vendored, new file): 18 lines
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<MPD xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:mpeg:dash:schema:mpd:2011" type="static" minBufferTime="PT2S" profiles="urn:mpeg:dash:profile:isoff-on-demand:2011" mediaPresentationDuration="PT6014S">
+  <Period bitstreamSwitching="true">
+    <AdaptationSet mimeType="audio/mp4" codecs="mp4a.40.2" startWithSAP="1" segmentAlignment="true">
+      <SegmentTemplate timescale="1000000" presentationTimeOffset="0" initialization="ai_$RepresentationID$.mp4d" media="a_$RepresentationID$_$Number$.mp4d" duration="2000000.0" startNumber="0"></SegmentTemplate>
+      <Representation id="318597" bandwidth="61587"></Representation>
+    </AdaptationSet>
+    <AdaptationSet mimeType="video/mp4" startWithSAP="1" segmentAlignment="true">
+      <SegmentTemplate timescale="1000000" presentationTimeOffset="0" initialization="vi_$RepresentationID$.mp4d" media="v_$RepresentationID$_$Number$.mp4d" duration="2000000.0" startNumber="0"></SegmentTemplate>
+      <Representation id="318597" codecs="avc1.42001f" width="340" height="192" bandwidth="318597"></Representation>
+      <Representation id="638590" codecs="avc1.42001f" width="512" height="288" bandwidth="638590"></Representation>
+      <Representation id="1022565" codecs="avc1.4d001f" width="688" height="384" bandwidth="1022565"></Representation>
+      <Representation id="2046506" codecs="avc1.4d001f" width="1024" height="576" bandwidth="2046506"></Representation>
+      <Representation id="3998017" codecs="avc1.640029" width="1280" height="720" bandwidth="3998017"></Representation>
+      <Representation id="5997485" codecs="avc1.640032" width="1920" height="1080" bandwidth="5997485"></Representation>
+    </AdaptationSet>
+  </Period>
+</MPD>
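This manifest exercises the float-duration path from #13919: `duration="2000000.0"` is a float where integer-only parsing used to fail. A quick sanity check of the arithmetic the test data implies (illustrative, not library code):

```python
# Segment length: duration is expressed in timescale units.
duration = 2000000.0 / 1000000   # timescale 1000000 -> 2.0 seconds per segment
# tbr the test expects: bandwidth (bit/s) expressed in kbit/s.
tbr = 318597 / 1000.0            # -> 318.597, matching the expected format above
assert duration == 2.0 and tbr == 318.597
```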
youtube_dl/YoutubeDL.py
@@ -26,6 +26,8 @@ import tokenize
 import traceback
+import random

+from string import ascii_letters

 from .compat import (
     compat_basestring,
     compat_cookiejar,
@@ -674,7 +676,19 @@ class YoutubeDL(object):
                 FORMAT_RE.format(numeric_field),
                 r'%({0})s'.format(numeric_field), outtmpl)

-        filename = expand_path(outtmpl % template_dict)
+        # expand_path translates '%%' into '%' and '$$' into '$'
+        # correspondingly that is not what we want since we need to keep
+        # '%%' intact for template dict substitution step. Working around
+        # with boundary-alike separator hack.
+        sep = ''.join([random.choice(ascii_letters) for _ in range(32)])
+        outtmpl = outtmpl.replace('%%', '%{0}%'.format(sep)).replace('$$', '${0}$'.format(sep))
+
+        # outtmpl should be expand_path'ed before template dict substitution
+        # because meta fields may contain env variables we don't want to
+        # be expanded. For example, for outtmpl "%(title)s.%(ext)s" and
+        # title "Hello $PATH", we don't want `$PATH` to be expanded.
+        filename = expand_path(outtmpl).replace(sep, '') % template_dict

        # Temporary fix for #4787
        # 'Treat' all problem characters by passing filename through preferredencoding
        # to workaround encoding issues with subprocess on python2 @ Windows
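The boundary hack is easiest to see in isolation. A standalone rerun of the two steps with illustrative values (`expand_path`, which in the real code runs between them and would otherwise collapse `%%` to `%`, is elided here):

```python
import random
from string import ascii_letters

outtmpl = '%%(height)06d.%(ext)s'  # '%%' must survive as a literal percent

# Step 1: shield literal '%%'/'$$' behind a random separator.
sep = ''.join(random.choice(ascii_letters) for _ in range(32))
protected = outtmpl.replace('%%', '%{0}%'.format(sep)).replace('$$', '${0}$'.format(sep))

# Step 2 (after expand_path in the real code): strip the separator,
# then apply the template dict; '%%' now reaches the '%' operator intact.
filename = protected.replace(sep, '') % {'ext': 'mp4'}
assert filename == '%(height)06d.mp4'  # matches the test expectation above
```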
@ -846,7 +860,7 @@ class YoutubeDL(object):
|
||||
|
||||
force_properties = dict(
|
||||
(k, v) for k, v in ie_result.items() if v is not None)
|
||||
for f in ('_type', 'url', 'ie_key'):
|
||||
for f in ('_type', 'url', 'id', 'extractor', 'extractor_key', 'ie_key'):
|
||||
if f in force_properties:
|
||||
del force_properties[f]
|
||||
new_result = info.copy()
|
||||
@ -1050,6 +1064,25 @@ class YoutubeDL(object):
            return op(actual_value, comparison_value)
        return _filter

    def _default_format_spec(self, info_dict, download=True):
        req_format_list = []

        def can_have_partial_formats():
            if self.params.get('simulate', False):
                return True
            if not download:
                return True
            if self.params.get('outtmpl', DEFAULT_OUTTMPL) == '-':
                return False
            if info_dict.get('is_live'):
                return False
            merger = FFmpegMergerPP(self)
            return merger.available and merger.can_merge()
        if can_have_partial_formats():
            req_format_list.append('bestvideo+bestaudio')
        req_format_list.append('best')
        return '/'.join(req_format_list)

    def build_format_selector(self, format_spec):
        def syntax_error(note, start):
            message = (
@ -1450,12 +1483,14 @@ class YoutubeDL(object):

        def is_wellformed(f):
            url = f.get('url')
            valid_url = url and isinstance(url, compat_str)
            if not valid_url:
            if not url:
                self.report_warning(
                    '"url" field is missing or empty - skipping format, '
                    'there is an error in extractor')
                return valid_url
                return False
            if isinstance(url, bytes):
                sanitize_string_field(f, 'url')
            return True

        # Filter out malformed formats for better extraction robustness
        formats = list(filter(is_wellformed, formats))
@ -1467,7 +1502,7 @@ class YoutubeDL(object):
            sanitize_string_field(format, 'format_id')
            sanitize_numeric_fields(format)
            format['url'] = sanitize_url(format['url'])
            if format.get('format_id') is None:
            if not format.get('format_id'):
                format['format_id'] = compat_str(i)
            else:
                # Sanitize format_id from characters used in format selector expression
@ -1520,14 +1555,10 @@ class YoutubeDL(object):

        req_format = self.params.get('format')
        if req_format is None:
            req_format_list = []
            if (self.params.get('outtmpl', DEFAULT_OUTTMPL) != '-' and
                    not info_dict.get('is_live')):
                merger = FFmpegMergerPP(self)
                if merger.available and merger.can_merge():
                    req_format_list.append('bestvideo+bestaudio')
            req_format_list.append('best')
            req_format = '/'.join(req_format_list)
            req_format = self._default_format_spec(info_dict, download=download)
            if self.params.get('verbose'):
                self.to_stdout('[debug] Default format spec: %s' % req_format)

        format_selector = self.build_format_selector(req_format)

        # While in format selection we may need to have an access to the original
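
For context, a loose sketch (assumed semantics, not the actual selector code) of how the '/'-joined spec produced by `_default_format_spec` is consumed: `build_format_selector` tries each alternative left to right, so 'bestvideo+bestaudio/best' degrades to plain 'best' when the merged alternative cannot be satisfied:

```
def resolve(spec, available):
    # try each '/'-separated alternative left to right, first match wins
    for alternative in spec.split('/'):
        if alternative in available:
            return alternative
    return None

assert resolve('bestvideo+bestaudio/best', {'best'}) == 'best'
assert resolve('bestvideo+bestaudio/best',
               {'bestvideo+bestaudio', 'best'}) == 'bestvideo+bestaudio'
```
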
@ -2,6 +2,7 @@ from __future__ import unicode_literals

from .fragment import FragmentFD
from ..compat import compat_urllib_error
from ..utils import urljoin


class DashSegmentsFD(FragmentFD):
@ -12,12 +13,13 @@ class DashSegmentsFD(FragmentFD):
    FD_NAME = 'dashsegments'

    def real_download(self, filename, info_dict):
        segments = info_dict['fragments'][:1] if self.params.get(
        fragment_base_url = info_dict.get('fragment_base_url')
        fragments = info_dict['fragments'][:1] if self.params.get(
            'test', False) else info_dict['fragments']

        ctx = {
            'filename': filename,
            'total_frags': len(segments),
            'total_frags': len(fragments),
        }

        self._prepare_and_start_frag_download(ctx)
@ -26,7 +28,7 @@ class DashSegmentsFD(FragmentFD):
        skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)

        frag_index = 0
        for i, segment in enumerate(segments):
        for i, fragment in enumerate(fragments):
            frag_index += 1
            if frag_index <= ctx['fragment_index']:
                continue
@ -36,7 +38,11 @@ class DashSegmentsFD(FragmentFD):
            count = 0
            while count <= fragment_retries:
                try:
                    success, frag_content = self._download_fragment(ctx, segment['url'], info_dict)
                    fragment_url = fragment.get('url')
                    if not fragment_url:
                        assert fragment_base_url
                        fragment_url = urljoin(fragment_base_url, fragment['path'])
                    success, frag_content = self._download_fragment(ctx, fragment_url, info_dict)
                    if not success:
                        return False
                    self._append_fragment(ctx, frag_content)
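
A runnable sketch of the fragment URL resolution added above; the base URL and fragment dicts are made up, and the stdlib `urljoin` stands in for `youtube_dl.utils.urljoin`:

```
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin      # Python 2

fragment_base_url = 'https://cdn.example.com/dash/'      # hypothetical
fragments = [
    {'url': 'https://cdn.example.com/dash/seg-1.m4s'},   # absolute: used as-is
    {'path': 'seg-2.m4s'},                               # relative: joined below
]

for fragment in fragments:
    fragment_url = fragment.get('url')
    if not fragment_url:
        assert fragment_base_url
        fragment_url = urljoin(fragment_base_url, fragment['path'])
    print(fragment_url)
```
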
@ -59,9 +59,9 @@ class HlsFD(FragmentFD):
        man_url = info_dict['url']
        self.to_screen('[%s] Downloading m3u8 manifest' % self.FD_NAME)

        manifest = self.ydl.urlopen(self._prepare_url(info_dict, man_url)).read()

        s = manifest.decode('utf-8', 'ignore')
        urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
        man_url = urlh.geturl()
        s = urlh.read().decode('utf-8', 'ignore')

        if not self.can_download(s, info_dict):
            if info_dict.get('extra_param_to_segment_url'):
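
Why the `man_url = urlh.geturl()` re-assignment matters: relative segment URIs in the manifest must be resolved against the manifest's final, post-redirect URL. A small illustration with hypothetical URLs:

```
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin      # Python 2

requested = 'https://example.com/live.m3u8'
final = 'https://edge7.example.com/hls/live.m3u8'  # after a 302 redirect

segment = 'seg0001.ts'
print(urljoin(requested, segment))  # https://example.com/seg0001.ts (wrong host)
print(urljoin(final, segment))      # https://edge7.example.com/hls/seg0001.ts
```
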
@ -98,7 +98,7 @@ def write_piff_header(stream, params):

    if is_audio:
        smhd_payload = s88.pack(0)  # balance
        smhd_payload = u16.pack(0)  # reserved
        smhd_payload += u16.pack(0)  # reserved
        media_header_box = full_box(b'smhd', 0, 0, smhd_payload)  # Sound Media Header
    else:
        vmhd_payload = u16.pack(0)  # graphics mode
@ -126,7 +126,6 @@ def write_piff_header(stream, params):
        if fourcc == 'AACL':
            sample_entry_box = box(b'mp4a', sample_entry_payload)
    else:
        sample_entry_payload = sample_entry_payload
        sample_entry_payload += u16.pack(0)  # pre defined
        sample_entry_payload += u16.pack(0)  # reserved
        sample_entry_payload += u32.pack(0) * 3  # pre defined
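
For readers without the full file: the `u16`/`s88` packers and the `box`/`full_box` helpers referenced above build ISO BMFF boxes (`s88` is a signed 8.8 fixed-point value stored in 16 bits). A self-contained approximation; the helper bodies are paraphrased from this file, so treat the exact layout as a sketch:

```
import struct

u16 = struct.Struct('>H')
u32 = struct.Struct('>I')
s88 = struct.Struct('>h')

def box(box_type, payload):
    # 4-byte size + 4-byte type + payload
    return u32.pack(8 + len(payload)) + box_type + payload

def full_box(box_type, version, flags, payload):
    # full boxes prepend 1 byte of version and 3 bytes of flags
    return box(box_type, struct.pack('>B', version) + u32.pack(flags)[1:] + payload)

smhd_payload = s88.pack(0)   # balance
smhd_payload += u16.pack(0)  # reserved
smhd = full_box(b'smhd', 0, 0, smhd_payload)
assert len(smhd) == 16  # 8-byte header + 4-byte version/flags + 4-byte payload
```
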
@ -107,11 +107,13 @@ class ADNIE(InfoExtractor):
        metas = options.get('metas') or {}
        title = metas.get('title') or video_info['title']
        links = player_config.get('links') or {}
        error = None
        if not links:
            links_url = player_config['linksurl']
            links_data = self._download_json(urljoin(
                self._BASE_URL, links_url), video_id)
            links = links_data.get('links') or {}
            error = links_data.get('error')

        formats = []
        for format_id, qualities in links.items():
@ -130,7 +132,8 @@ class ADNIE(InfoExtractor):
            for f in m3u8_formats:
                f['language'] = 'fr'
            formats.extend(m3u8_formats)
        error = options.get('error')
        if not error:
            error = options.get('error')
        if not formats and error:
            raise ExtractorError('%s said: %s' % (self.IE_NAME, error), expected=True)
        self._sort_formats(formats)
@ -3,9 +3,10 @@ from __future__ import unicode_literals

from .theplatform import ThePlatformIE
from ..utils import (
    update_url_query,
    parse_age_limit,
    int_or_none,
    parse_age_limit,
    try_get,
    update_url_query,
)


@ -68,7 +69,8 @@ class AMCNetworksIE(ThePlatformIE):
        info = self._parse_theplatform_metadata(theplatform_metadata)
        video_id = theplatform_metadata['pid']
        title = theplatform_metadata['title']
        rating = theplatform_metadata['ratings'][0]['rating']
        rating = try_get(
            theplatform_metadata, lambda x: x['ratings'][0]['rating'])
        auth_required = self._search_regex(
            r'window\.authRequired\s*=\s*(true|false);',
            webpage, 'auth required')
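
The switch to `try_get` makes the missing-ratings case non-fatal: the old line raised `KeyError` or `IndexError` when `ratings` was absent or empty. A simplified reimplementation of the helper (the real one lives in `youtube_dl/utils.py`) to show the behavior:

```
def try_get(src, getter, expected_type=None):
    # returns None instead of raising when the path does not exist
    try:
        v = getter(src)
    except (AttributeError, KeyError, TypeError, IndexError):
        return None
    if expected_type is None or isinstance(v, expected_type):
        return v

assert try_get({'ratings': [{'rating': 'TV-14'}]},
               lambda x: x['ratings'][0]['rating']) == 'TV-14'
assert try_get({}, lambda x: x['ratings'][0]['rating']) is None
```
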
@ -3,13 +3,13 @@ from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    HEADRequest,
    int_or_none,
    mimetype2ext,
)


class AparatIE(InfoExtractor):
    _VALID_URL = r'^https?://(?:www\.)?aparat\.com/(?:v/|video/video/embed/videohash/)(?P<id>[a-zA-Z0-9]+)'
    _VALID_URL = r'https?://(?:www\.)?aparat\.com/(?:v/|video/video/embed/videohash/)(?P<id>[a-zA-Z0-9]+)'

    _TEST = {
        'url': 'http://www.aparat.com/v/wP8On',
@ -29,30 +29,41 @@ class AparatIE(InfoExtractor):
        # Note: There is an easier-to-parse configuration at
        # http://www.aparat.com/video/video/config/videohash/%video_id
        # but the URL in there does not work
        embed_url = 'http://www.aparat.com/video/video/embed/vt/frame/showvideo/yes/videohash/' + video_id
        webpage = self._download_webpage(embed_url, video_id)

        file_list = self._parse_json(self._search_regex(
            r'fileList\s*=\s*JSON\.parse\(\'([^\']+)\'\)', webpage, 'file list'), video_id)
        for i, item in enumerate(file_list[0]):
            video_url = item['file']
            req = HEADRequest(video_url)
            res = self._request_webpage(
                req, video_id, note='Testing video URL %d' % i, errnote=False)
            if res:
                break
        else:
            raise ExtractorError('No working video URLs found')
        webpage = self._download_webpage(
            'http://www.aparat.com/video/video/embed/vt/frame/showvideo/yes/videohash/' + video_id,
            video_id)

        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, 'title')

        file_list = self._parse_json(
            self._search_regex(
                r'fileList\s*=\s*JSON\.parse\(\'([^\']+)\'\)', webpage,
                'file list'),
            video_id)

        formats = []
        for item in file_list[0]:
            file_url = item.get('file')
            if not file_url:
                continue
            ext = mimetype2ext(item.get('type'))
            label = item.get('label')
            formats.append({
                'url': file_url,
                'ext': ext,
                'format_id': label or ext,
                'height': int_or_none(self._search_regex(
                    r'(\d+)[pP]', label or '', 'height', default=None)),
            })
        self._sort_formats(formats)

        thumbnail = self._search_regex(
            r'image:\s*"([^"]+)"', webpage, 'thumbnail', fatal=False)

        return {
            'id': video_id,
            'title': title,
            'url': video_url,
            'ext': 'mp4',
            'thumbnail': thumbnail,
            'age_limit': self._family_friendly_search(webpage),
            'formats': formats,
        }
@ -93,6 +93,7 @@ class ARDMediathekIE(InfoExtractor):

        duration = int_or_none(media_info.get('_duration'))
        thumbnail = media_info.get('_previewImage')
        is_live = media_info.get('_isLive') is True

        subtitles = {}
        subtitle_url = media_info.get('_subtitleUrl')
@ -106,6 +107,7 @@ class ARDMediathekIE(InfoExtractor):
            'id': video_id,
            'duration': duration,
            'thumbnail': thumbnail,
            'is_live': is_live,
            'formats': formats,
            'subtitles': subtitles,
        }
@ -166,9 +168,11 @@ class ARDMediathekIE(InfoExtractor):
        # determine video id from url
        m = re.match(self._VALID_URL, url)

        document_id = None

        numid = re.search(r'documentId=([0-9]+)', url)
        if numid:
            video_id = numid.group(1)
            document_id = video_id = numid.group(1)
        else:
            video_id = m.group('video_id')

@ -228,12 +232,16 @@ class ARDMediathekIE(InfoExtractor):
                'formats': formats,
            }
        else:  # request JSON file
            if not document_id:
                video_id = self._search_regex(
                    r'/play/(?:config|media)/(\d+)', webpage, 'media id')
            info = self._extract_media_info(
                'http://www.ardmediathek.de/play/media/%s' % video_id, webpage, video_id)
                'http://www.ardmediathek.de/play/media/%s' % video_id,
                webpage, video_id)

            info.update({
                'id': video_id,
                'title': title,
                'title': self._live_title(title) if info.get('is_live') else title,
                'description': description,
                'thumbnail': thumbnail,
            })
@ -9,12 +9,13 @@ from ..compat import (
    compat_urllib_parse_urlparse,
)
from ..utils import (
    ExtractorError,
    find_xpath_attr,
    unified_strdate,
    get_element_by_attribute,
    int_or_none,
    NO_DEFAULT,
    qualities,
    unified_strdate,
)

# There are different sources of video in arte.tv, the extraction process
@ -79,6 +80,13 @@ class ArteTVBaseIE(InfoExtractor):
        info = self._download_json(json_url, video_id)
        player_info = info['videoJsonPlayer']

        vsr = player_info['VSR']

        if not vsr and not player_info.get('VRU'):
            raise ExtractorError(
                'Video %s is not available' % player_info.get('VID') or video_id,
                expected=True)

        upload_date_str = player_info.get('shootingDate')
        if not upload_date_str:
            upload_date_str = (player_info.get('VRA') or player_info.get('VDA') or '').split(' ')[0]
@ -107,7 +115,7 @@ class ArteTVBaseIE(InfoExtractor):
        langcode = LANGS.get(lang, lang)

        formats = []
        for format_id, format_dict in player_info['VSR'].items():
        for format_id, format_dict in vsr.items():
            f = dict(format_dict)
            versionCode = f.get('versionCode')
            l = re.escape(langcode)
@ -43,7 +43,7 @@ class AudioBoomIE(InfoExtractor):

        def from_clip(field):
            if clip:
                clip.get(field)
                return clip.get(field)

        audio_url = from_clip('clipURLPriorToLoading') or self._og_search_property(
            'audio', webpage, 'audio url')
@ -242,7 +242,12 @@ class BandcampAlbumIE(InfoExtractor):
            raise ExtractorError('The page doesn\'t contain any tracks')
        # Only tracks with duration info have songs
        entries = [
            self.url_result(compat_urlparse.urljoin(url, t_path), ie=BandcampIE.ie_key())
            self.url_result(
                compat_urlparse.urljoin(url, t_path),
                ie=BandcampIE.ie_key(),
                video_title=self._search_regex(
                    r'<span\b[^>]+\bitemprop=["\']name["\'][^>]*>([^<]+)',
                    elem_content, 'track title', fatal=False))
            for elem_content, t_path in track_elements
            if self._html_search_meta('duration', elem_content, default=None)]

@ -37,7 +37,8 @@ class BBCCoUkIE(InfoExtractor):
                            programmes/(?!articles/)|
                            iplayer(?:/[^/]+)?/(?:episode/|playlist/)|
                            music/(?:clips|audiovideo/popular)[/#]|
                            radio/player/
                            radio/player/|
                            events/[^/]+/play/[^/]+/
                        )
                        (?P<id>%s)(?!/(?:episodes|broadcasts|clips))
                        ''' % _ID_REGEX
@ -124,7 +124,7 @@ class CDAIE(InfoExtractor):
        }

        def extract_format(page, version):
            json_str = self._search_regex(
            json_str = self._html_search_regex(
                r'player_data=(\\?["\'])(?P<player_data>.+?)\1', page,
                '%s player_json' % version, fatal=False, group='player_data')
            if not json_str:
@ -9,12 +9,20 @@ from ..utils import (


class CinchcastIE(InfoExtractor):
    _VALID_URL = r'https?://player\.cinchcast\.com/.*?assetId=(?P<id>[0-9]+)'
    _TEST = {
    _VALID_URL = r'https?://player\.cinchcast\.com/.*?(?:assetId|show_id)=(?P<id>[0-9]+)'
    _TESTS = [{
        'url': 'http://player.cinchcast.com/?show_id=5258197&platformId=1&assetType=single',
        'info_dict': {
            'id': '5258197',
            'ext': 'mp3',
            'title': 'Train Your Brain to Up Your Game with Coach Mandy',
            'upload_date': '20130816',
        },
    }, {
        # Actual test is run in generic, look for undergroundwellness
        'url': 'http://player.cinchcast.com/?platformId=1&assetType=single&assetId=7141703',
        'only_matching': True,
    }
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
@ -1,67 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import (
    int_or_none,
    unified_strdate,
)


class ClipfishIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?clipfish\.de/(?:[^/]+/)+video/(?P<id>[0-9]+)'
    _TEST = {
        'url': 'http://www.clipfish.de/special/ugly-americans/video/4343170/s01-e01-ugly-americans-date-in-der-hoelle/',
        'md5': 'b9a5dc46294154c1193e2d10e0c95693',
        'info_dict': {
            'id': '4343170',
            'ext': 'mp4',
            'title': 'S01 E01 - Ugly Americans - Date in der Hölle',
            'description': 'Mark Lilly arbeitet im Sozialdienst der Stadt New York und soll Immigranten bei ihrer Einbürgerung in die USA zur Seite stehen.',
            'upload_date': '20161005',
            'duration': 1291,
            'view_count': int,
        }
    }

    def _real_extract(self, url):
        video_id = self._match_id(url)

        video_info = self._download_json(
            'http://www.clipfish.de/devapi/id/%s?format=json&apikey=hbbtv' % video_id,
            video_id)['items'][0]

        formats = []

        m3u8_url = video_info.get('media_videourl_hls')
        if m3u8_url:
            formats.append({
                'url': m3u8_url.replace('de.hls.fra.clipfish.de', 'hls.fra.clipfish.de'),
                'ext': 'mp4',
                'format_id': 'hls',
            })

        mp4_url = video_info.get('media_videourl')
        if mp4_url:
            formats.append({
                'url': mp4_url,
                'format_id': 'mp4',
                'width': int_or_none(video_info.get('width')),
                'height': int_or_none(video_info.get('height')),
                'tbr': int_or_none(video_info.get('bitrate')),
            })

        descr = video_info.get('descr')
        if descr:
            descr = descr.strip()

        return {
            'id': video_id,
            'title': video_info['title'],
            'description': descr,
            'formats': formats,
            'thumbnail': video_info.get('media_content_thumbnail_large') or video_info.get('media_thumbnail'),
            'duration': int_or_none(video_info.get('media_length')),
            'upload_date': unified_strdate(video_info.get('pubDate')),
            'view_count': int_or_none(video_info.get('media_views'))
        }
74 youtube_dl/extractor/clippit.py Normal file
@ -0,0 +1,74 @@
# coding: utf-8

from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import (
    parse_iso8601,
    qualities,
)

import re


class ClippitIE(InfoExtractor):

    _VALID_URL = r'https?://(?:www\.)?clippituser\.tv/c/(?P<id>[a-z]+)'
    _TEST = {
        'url': 'https://www.clippituser.tv/c/evmgm',
        'md5': '963ae7a59a2ec4572ab8bf2f2d2c5f09',
        'info_dict': {
            'id': 'evmgm',
            'ext': 'mp4',
            'title': 'Bye bye Brutus. #BattleBots - Clippit',
            'uploader': 'lizllove',
            'uploader_url': 'https://www.clippituser.tv/p/lizllove',
            'timestamp': 1472183818,
            'upload_date': '20160826',
            'description': 'BattleBots | ABC',
            'thumbnail': r're:^https?://.*\.jpg$',
        }
    }

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        title = self._html_search_regex(r'<title.*>(.+?)</title>', webpage, 'title')

        FORMATS = ('sd', 'hd')
        quality = qualities(FORMATS)
        formats = []
        for format_id in FORMATS:
            url = self._html_search_regex(r'data-%s-file="(.+?)"' % format_id,
                                          webpage, 'url', fatal=False)
            if not url:
                continue
            match = re.search(r'/(?P<height>\d+)\.mp4', url)
            formats.append({
                'url': url,
                'format_id': format_id,
                'quality': quality(format_id),
                'height': int(match.group('height')) if match else None,
            })

        uploader = self._html_search_regex(r'class="username".*>\s+(.+?)\n',
                                           webpage, 'uploader', fatal=False)
        uploader_url = ('https://www.clippituser.tv/p/' + uploader
                        if uploader else None)

        timestamp = self._html_search_regex(r'datetime="(.+?)"',
                                            webpage, 'date', fatal=False)
        thumbnail = self._html_search_regex(r'data-image="(.+?)"',
                                            webpage, 'thumbnail', fatal=False)

        return {
            'id': video_id,
            'title': title,
            'formats': formats,
            'uploader': uploader,
            'uploader_url': uploader_url,
            'timestamp': parse_iso8601(timestamp),
            'description': self._og_search_description(webpage),
            'thumbnail': thumbnail,
        }
@ -30,7 +30,11 @@ class CloudyIE(InfoExtractor):
        video_id = self._match_id(url)

        webpage = self._download_webpage(
            'http://www.cloudy.ec/embed.php?id=%s' % video_id, video_id)
            'https://www.cloudy.ec/embed.php', video_id, query={
                'id': video_id,
                'playerPage': 1,
                'autoplay': 1,
            })

        info = self._parse_html5_media_entries(url, webpage, video_id)[0]

@ -27,6 +27,7 @@ from ..compat import (
    compat_urllib_parse_urlencode,
    compat_urllib_request,
    compat_urlparse,
    compat_xml_parse_error,
)
from ..downloader.f4m import remove_encrypted_media
from ..utils import (
@ -646,15 +647,29 @@ class InfoExtractor(object):

    def _download_xml(self, url_or_request, video_id,
                      note='Downloading XML', errnote='Unable to download XML',
                      transform_source=None, fatal=True, encoding=None, data=None, headers={}, query={}):
                      transform_source=None, fatal=True, encoding=None,
                      data=None, headers={}, query={}):
        """Return the xml as an xml.etree.ElementTree.Element"""
        xml_string = self._download_webpage(
            url_or_request, video_id, note, errnote, fatal=fatal, encoding=encoding, data=data, headers=headers, query=query)
            url_or_request, video_id, note, errnote, fatal=fatal,
            encoding=encoding, data=data, headers=headers, query=query)
        if xml_string is False:
            return xml_string
        return self._parse_xml(
            xml_string, video_id, transform_source=transform_source,
            fatal=fatal)

    def _parse_xml(self, xml_string, video_id, transform_source=None, fatal=True):
        if transform_source:
            xml_string = transform_source(xml_string)
        return compat_etree_fromstring(xml_string.encode('utf-8'))
        try:
            return compat_etree_fromstring(xml_string.encode('utf-8'))
        except compat_xml_parse_error as ve:
            errmsg = '%s: Failed to parse XML ' % video_id
            if fatal:
                raise ExtractorError(errmsg, cause=ve)
            else:
                self.report_warning(errmsg + str(ve))

    def _download_json(self, url_or_request, video_id,
                       note='Downloading JSON metadata',
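
The effect of the new `try`/`except` in `_parse_xml`, demonstrated standalone with the stdlib parser standing in for `compat_etree_fromstring` (inputs are hypothetical): with `fatal=False`, a syntactically broken XML document now produces a warning and `None` instead of aborting the whole extraction.

```
import xml.etree.ElementTree as etree

def parse_xml(xml_string, fatal=True):
    try:
        return etree.fromstring(xml_string.encode('utf-8'))
    except etree.ParseError as ve:
        if fatal:
            raise
        print('WARNING: Failed to parse XML: %s' % ve)

assert parse_xml('<a><b/></a>').tag == 'a'
assert parse_xml('<a><b></a>', fatal=False) is None  # warns instead of raising
```
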
@ -730,12 +745,12 @@ class InfoExtractor(object):
            video_info['title'] = video_title
        return video_info

    def playlist_from_matches(self, matches, video_id, video_title, getter=None, ie=None):
        urlrs = orderedSet(
    def playlist_from_matches(self, matches, playlist_id=None, playlist_title=None, getter=None, ie=None):
        urls = orderedSet(
            self.url_result(self._proto_relative_url(getter(m) if getter else m), ie)
            for m in matches)
        return self.playlist_result(
            urlrs, playlist_id=video_id, playlist_title=video_title)
            urls, playlist_id=playlist_id, playlist_title=playlist_title)

    @staticmethod
    def playlist_result(entries, playlist_id=None, playlist_title=None, playlist_description=None):
@ -940,7 +955,8 @@ class InfoExtractor(object):

    def _family_friendly_search(self, html):
        # See http://schema.org/VideoObject
        family_friendly = self._html_search_meta('isFamilyFriendly', html)
        family_friendly = self._html_search_meta(
            'isFamilyFriendly', html, default=None)

        if not family_friendly:
            return None
@ -1785,7 +1801,7 @@ class InfoExtractor(object):
                    ms_info['timescale'] = int(timescale)
                segment_duration = source.get('duration')
                if segment_duration:
                    ms_info['segment_duration'] = int(segment_duration)
                    ms_info['segment_duration'] = float(segment_duration)

            def extract_Initialization(source):
                initialization = source.find(_add_ns('Initialization'))
@ -1892,9 +1908,13 @@ class InfoExtractor(object):
                            'Bandwidth': bandwidth,
                        }

                    def location_key(location):
                        return 'url' if re.match(r'^https?://', location) else 'path'

                    if 'segment_urls' not in representation_ms_info and 'media' in representation_ms_info:

                        media_template = prepare_template('media', ('Number', 'Bandwidth', 'Time'))
                        media_location_key = location_key(media_template)

                        # As per [1, 5.3.9.4.4, Table 16, page 55] $Number$ and $Time$
                        # can't be used at the same time
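
The new `location_key` helper is the pivot of this change: absolute locations keep working as before under 'url', while relative ones are stored under 'path' and resolved later against `fragment_base_url` (see the `downloader/dash.py` hunk earlier in this commit). A tiny demonstration:

```
import re

def location_key(location):
    return 'url' if re.match(r'^https?://', location) else 'path'

assert location_key('https://cdn.example.com/seg-1.m4s') == 'url'  # hypothetical URL
assert location_key('seg-1.m4s') == 'path'
```
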
@ -1904,7 +1924,7 @@ class InfoExtractor(object):
                            segment_duration = float_or_none(representation_ms_info['segment_duration'], representation_ms_info['timescale'])
                            representation_ms_info['total_number'] = int(math.ceil(float(period_duration) / segment_duration))
                            representation_ms_info['fragments'] = [{
                                'url': media_template % {
                                media_location_key: media_template % {
                                    'Number': segment_number,
                                    'Bandwidth': bandwidth,
                                },
@ -1928,7 +1948,7 @@ class InfoExtractor(object):
                                    'Number': segment_number,
                                }
                                representation_ms_info['fragments'].append({
                                    'url': segment_url,
                                    media_location_key: segment_url,
                                    'duration': float_or_none(segment_d, representation_ms_info['timescale']),
                                })

@ -1952,8 +1972,9 @@ class InfoExtractor(object):
                        for s in representation_ms_info['s']:
                            duration = float_or_none(s['d'], timescale)
                            for r in range(s.get('r', 0) + 1):
                                segment_uri = representation_ms_info['segment_urls'][segment_index]
                                fragments.append({
                                    'url': representation_ms_info['segment_urls'][segment_index],
                                    location_key(segment_uri): segment_uri,
                                    'duration': duration,
                                })
                                segment_index += 1
@ -1962,6 +1983,7 @@ class InfoExtractor(object):
                # No fragments key is present in this case.
                if 'fragments' in representation_ms_info:
                    f.update({
                        'fragment_base_url': base_url,
                        'fragments': [],
                        'protocol': 'http_dash_segments',
                    })
@ -1969,10 +1991,8 @@ class InfoExtractor(object):
                        initialization_url = representation_ms_info['initialization_url']
                        if not f.get('url'):
                            f['url'] = initialization_url
                        f['fragments'].append({'url': initialization_url})
                        f['fragments'].append({location_key(initialization_url): initialization_url})
                    f['fragments'].extend(representation_ms_info['fragments'])
                    for fragment in f['fragments']:
                        fragment['url'] = urljoin(base_url, fragment['url'])
                try:
                    existing_format = next(
                        fo for fo in formats
@ -2110,19 +2130,19 @@ class InfoExtractor(object):
                    return f
            return {}

        def _media_formats(src, cur_media_type):
        def _media_formats(src, cur_media_type, type_info={}):
            full_url = absolute_url(src)
            ext = determine_ext(full_url)
            ext = type_info.get('ext') or determine_ext(full_url)
            if ext == 'm3u8':
                is_plain_url = False
                formats = self._extract_m3u8_formats(
                    full_url, video_id, ext='mp4',
                    entry_protocol=m3u8_entry_protocol, m3u8_id=m3u8_id,
                    preference=preference)
                    preference=preference, fatal=False)
            elif ext == 'mpd':
                is_plain_url = False
                formats = self._extract_mpd_formats(
                    full_url, video_id, mpd_id=mpd_id)
                    full_url, video_id, mpd_id=mpd_id, fatal=False)
            else:
                is_plain_url = True
                formats = [{
@ -2161,9 +2181,9 @@ class InfoExtractor(object):
                src = source_attributes.get('src')
                if not src:
                    continue
                is_plain_url, formats = _media_formats(src, media_type)
                f = parse_content_type(source_attributes.get('type'))
                is_plain_url, formats = _media_formats(src, media_type, f)
                if is_plain_url:
                    f = parse_content_type(source_attributes.get('type'))
                    f.update(formats[0])
                    media_info['formats'].append(f)
                else:
@ -510,7 +510,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text

        # webpage provide more accurate data than series_title from XML
        series = self._html_search_regex(
            r'id=["\']showmedia_about_episode_num[^>]+>\s*<a[^>]+>([^<]+)',
            r'(?s)<h\d[^>]+\bid=["\']showmedia_about_episode_num[^>]+>(.+?)</h\d',
            webpage, 'series', fatal=False)
        season = xpath_text(metadata, 'series_title')

@ -518,7 +518,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
        episode_number = int_or_none(xpath_text(metadata, 'episode_number'))

        season_number = int_or_none(self._search_regex(
            r'(?s)<h4[^>]+id=["\']showmedia_about_episode_num[^>]+>.+?</h4>\s*<h4>\s*Season (\d+)',
            r'(?s)<h\d[^>]+id=["\']showmedia_about_episode_num[^>]+>.+?</h\d>\s*<h4>\s*Season (\d+)',
            webpage, 'season number', default=None))

        return {
@ -13,7 +13,7 @@ from ..utils import (


class DigitallySpeakingIE(InfoExtractor):
    _VALID_URL = r'https?://(?:evt\.dispeak|events\.digitallyspeaking)\.com/(?:[^/]+/)+xml/(?P<id>[^.]+)\.xml'
    _VALID_URL = r'https?://(?:s?evt\.dispeak|events\.digitallyspeaking)\.com/(?:[^/]+/)+xml/(?P<id>[^.]+)\.xml'

    _TESTS = [{
        # From http://gdcvault.com/play/1023460/Tenacious-Design-and-The-Interface
@ -28,6 +28,10 @@ class DigitallySpeakingIE(InfoExtractor):
        # From http://www.gdcvault.com/play/1014631/Classic-Game-Postmortem-PAC
        'url': 'http://events.digitallyspeaking.com/gdc/sf11/xml/12396_1299111843500GMPX.xml',
        'only_matching': True,
    }, {
        # From http://www.gdcvault.com/play/1013700/Advanced-Material
        'url': 'http://sevt.dispeak.com/ubm/gdc/eur10/xml/11256_1282118587281VNIT.xml',
        'only_matching': True,
    }]

    def _parse_mp4(self, metadata):
@ -7,16 +7,18 @@ import time

from .common import InfoExtractor
from ..compat import (
    compat_urlparse,
    compat_HTTPError,
    compat_str,
    compat_urlparse,
)
from ..utils import (
    USER_AGENTS,
    ExtractorError,
    int_or_none,
    unified_strdate,
    remove_end,
    try_get,
    unified_strdate,
    update_url_query,
    USER_AGENTS,
)


@ -183,28 +185,44 @@ class DPlayItIE(InfoExtractor):

        webpage = self._download_webpage(url, display_id)

        info_url = self._search_regex(
            r'url\s*[:=]\s*["\']((?:https?:)?//[^/]+/playback/videoPlaybackInfo/\d+)',
            webpage, 'video id')

        title = remove_end(self._og_search_title(webpage), ' | Dplay')

        try:
            info = self._download_json(
                info_url, display_id, headers={
                    'Authorization': 'Bearer %s' % self._get_cookies(url).get(
                        'dplayit_token').value,
                    'Referer': url,
                })
        except ExtractorError as e:
            if isinstance(e.cause, compat_HTTPError) and e.cause.code in (400, 403):
                info = self._parse_json(e.cause.read().decode('utf-8'), display_id)
                error = info['errors'][0]
                if error.get('code') == 'access.denied.geoblocked':
                    self.raise_geo_restricted(
                        msg=error.get('detail'), countries=self._GEO_COUNTRIES)
                raise ExtractorError(info['errors'][0]['detail'], expected=True)
            raise
        video_id = None

        info = self._search_regex(
            r'playback_json\s*:\s*JSON\.parse\s*\(\s*("(?:\\.|[^"\\])+?")',
            webpage, 'playback JSON', default=None)
        if info:
            for _ in range(2):
                info = self._parse_json(info, display_id, fatal=False)
                if not info:
                    break
            else:
                video_id = try_get(info, lambda x: x['data']['id'])

        if not info:
            info_url = self._search_regex(
                r'url\s*[:=]\s*["\']((?:https?:)?//[^/]+/playback/videoPlaybackInfo/\d+)',
                webpage, 'info url')

            video_id = info_url.rpartition('/')[-1]

            try:
                info = self._download_json(
                    info_url, display_id, headers={
                        'Authorization': 'Bearer %s' % self._get_cookies(url).get(
                            'dplayit_token').value,
                        'Referer': url,
                    })
            except ExtractorError as e:
                if isinstance(e.cause, compat_HTTPError) and e.cause.code in (400, 403):
                    info = self._parse_json(e.cause.read().decode('utf-8'), display_id)
                    error = info['errors'][0]
                    if error.get('code') == 'access.denied.geoblocked':
                        self.raise_geo_restricted(
                            msg=error.get('detail'), countries=self._GEO_COUNTRIES)
                    raise ExtractorError(info['errors'][0]['detail'], expected=True)
                raise

        hls_url = info['data']['attributes']['streaming']['hls']['url']

@ -230,7 +248,7 @@ class DPlayItIE(InfoExtractor):
        season_number = episode_number = upload_date = None

        return {
            'id': info_url.rpartition('/')[-1],
            'id': compat_str(video_id or display_id),
            'display_id': display_id,
            'title': title,
            'description': self._og_search_description(webpage),
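
The `for _ in range(2)` loop above handles `playback_json` being double-encoded: JSON that is itself stored inside a JSON string literal, so it must be decoded twice. A standalone illustration with a made-up payload:

```
import json

raw = json.dumps(json.dumps({'data': {'id': '12345'}}))  # double-encoded

info = raw
for _ in range(2):
    info = json.loads(info)  # first pass yields a string, second a dict

assert info['data']['id'] == '12345'
```
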
@ -12,6 +12,7 @@ from ..utils import (
    ExtractorError,
    clean_html,
    int_or_none,
    remove_end,
    sanitized_Request,
    urlencode_postdata
)
@ -72,15 +73,15 @@ class DramaFeverIE(DramaFeverBaseIE):
        'url': 'http://www.dramafever.com/drama/4512/1/Cooking_with_Shin/',
        'info_dict': {
            'id': '4512.1',
            'ext': 'mp4',
            'title': 'Cooking with Shin 4512.1',
            'ext': 'flv',
            'title': 'Cooking with Shin',
            'description': 'md5:a8eec7942e1664a6896fcd5e1287bfd0',
            'episode': 'Episode 1',
            'episode_number': 1,
            'thumbnail': r're:^https?://.*\.jpg',
            'timestamp': 1404336058,
            'upload_date': '20140702',
            'duration': 343,
            'duration': 344,
        },
        'params': {
            # m3u8 download
@ -90,15 +91,15 @@ class DramaFeverIE(DramaFeverBaseIE):
        'url': 'http://www.dramafever.com/drama/4826/4/Mnet_Asian_Music_Awards_2015/?ap=1',
        'info_dict': {
            'id': '4826.4',
            'ext': 'mp4',
            'title': 'Mnet Asian Music Awards 2015 4826.4',
            'ext': 'flv',
            'title': 'Mnet Asian Music Awards 2015',
            'description': 'md5:3ff2ee8fedaef86e076791c909cf2e91',
            'episode': 'Mnet Asian Music Awards 2015 - Part 3',
            'episode_number': 4,
            'thumbnail': r're:^https?://.*\.jpg',
            'timestamp': 1450213200,
            'upload_date': '20151215',
            'duration': 5602,
            'duration': 5359,
        },
        'params': {
            # m3u8 download
@ -122,6 +123,10 @@ class DramaFeverIE(DramaFeverBaseIE):
                    countries=self._GEO_COUNTRIES)
            raise

        # title is postfixed with video id for some reason, removing
        if info.get('title'):
            info['title'] = remove_end(info['title'], video_id).strip()

        series_id, episode_number = video_id.split('.')
        episode_info = self._download_json(
            # We only need a single episode info, so restricting page size to one episode
|
||||
if target == 'HDS':
|
||||
f4m_formats = self._extract_f4m_formats(
|
||||
uri + '?hdcore=3.3.0&plugin=aasp-3.3.0.99.43',
|
||||
video_id, preference, f4m_id=format_id)
|
||||
video_id, preference, f4m_id=format_id, fatal=False)
|
||||
if kind == 'AudioResource':
|
||||
for f in f4m_formats:
|
||||
f['vcodec'] = 'none'
|
||||
@ -126,7 +126,8 @@ class DRTVIE(InfoExtractor):
|
||||
elif target == 'HLS':
|
||||
formats.extend(self._extract_m3u8_formats(
|
||||
uri, video_id, 'mp4', entry_protocol='m3u8_native',
|
||||
preference=preference, m3u8_id=format_id))
|
||||
preference=preference, m3u8_id=format_id,
|
||||
fatal=False))
|
||||
else:
|
||||
bitrate = link.get('Bitrate')
|
||||
if bitrate:
|
||||
|
@ -2,6 +2,11 @@
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..utils import (
|
||||
int_or_none,
|
||||
try_get,
|
||||
unified_timestamp,
|
||||
)
|
||||
|
||||
|
||||
class EggheadCourseIE(InfoExtractor):
|
||||
@ -33,3 +38,47 @@ class EggheadCourseIE(InfoExtractor):
|
||||
return self.playlist_result(
|
||||
entries, playlist_id, course.get('title'),
|
||||
course.get('description'))
|
||||
|
||||
|
||||
class EggheadLessonIE(InfoExtractor):
|
||||
IE_DESC = 'egghead.io lesson'
|
||||
IE_NAME = 'egghead:lesson'
|
||||
_VALID_URL = r'https://egghead\.io/lessons/(?P<id>[^/?#&]+)'
|
||||
_TEST = {
|
||||
'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',
|
||||
'info_dict': {
|
||||
'id': 'fv5yotjxcg',
|
||||
'ext': 'mp4',
|
||||
'title': 'Create linear data flow with container style types (Box)',
|
||||
'description': 'md5:9aa2cdb6f9878ed4c39ec09e85a8150e',
|
||||
'thumbnail': r're:^https?:.*\.jpg$',
|
||||
'timestamp': 1481296768,
|
||||
'upload_date': '20161209',
|
||||
'duration': 304,
|
||||
'view_count': 0,
|
||||
'tags': ['javascript', 'free'],
|
||||
},
|
||||
'params': {
|
||||
'skip_download': True,
|
||||
},
|
||||
}
|
||||
|
||||
def _real_extract(self, url):
|
||||
lesson_id = self._match_id(url)
|
||||
|
||||
lesson = self._download_json(
|
||||
'https://egghead.io/api/v1/lessons/%s' % lesson_id, lesson_id)
|
||||
|
||||
return {
|
||||
'_type': 'url_transparent',
|
||||
'ie_key': 'Wistia',
|
||||
'url': 'wistia:%s' % lesson['wistia_id'],
|
||||
'id': lesson['wistia_id'],
|
||||
'title': lesson.get('title'),
|
||||
'description': lesson.get('summary'),
|
||||
'thumbnail': lesson.get('thumb_nail'),
|
||||
'timestamp': unified_timestamp(lesson.get('published_at')),
|
||||
'duration': int_or_none(lesson.get('duration')),
|
||||
'view_count': int_or_none(lesson.get('plays_count')),
|
||||
'tags': try_get(lesson, lambda x: x['tag_list'], list),
|
||||
}
|
||||
|
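
A rough sketch of what `'_type': 'url_transparent'` means for the lesson result above: youtube-dl resolves the Wistia URL, then overlays the outer dict's non-None fields, skipping the keys that the `force_properties` change earlier in this commit excludes. Simplified merge logic for illustration only, not the actual `YoutubeDL` code:

```
def merge_url_transparent(outer, resolved):
    merged = dict(resolved)
    for k, v in outer.items():
        if k in ('_type', 'url', 'id', 'extractor', 'extractor_key', 'ie_key'):
            continue  # keys excluded per the YoutubeDL.py change in this commit
        if v is not None:
            merged[k] = v
    return merged

wistia_result = {'id': 'fv5yotjxcg', 'ext': 'mp4', 'title': 'raw wistia title'}
lesson_result = {'_type': 'url_transparent', 'ie_key': 'Wistia',
                 'url': 'wistia:fv5yotjxcg', 'id': 'fv5yotjxcg',
                 'title': 'Create linear data flow with container style types (Box)'}
merged = merge_url_transparent(lesson_result, wistia_result)
assert merged['title'].startswith('Create linear data flow')
```
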
@ -190,8 +190,8 @@ from .chirbit import (
)
from .cinchcast import CinchcastIE
from .cjsw import CJSWIE
from .clipfish import ClipfishIE
from .cliphunter import CliphunterIE
from .clippit import ClippitIE
from .cliprs import ClipRsIE
from .clipsyndicate import ClipsyndicateIE
from .closertotruth import CloserToTruthIE
@ -302,7 +302,10 @@ from .dw import (
from .eagleplatform import EaglePlatformIE
from .ebaumsworld import EbaumsWorldIE
from .echomsk import EchoMskIE
from .egghead import EggheadCourseIE
from .egghead import (
    EggheadCourseIE,
    EggheadLessonIE,
)
from .ehow import EHowIE
from .eighttracks import EightTracksIE
from .einthusan import EinthusanIE
@ -352,7 +355,12 @@ from .flipagram import FlipagramIE
from .folketinget import FolketingetIE
from .footyroom import FootyRoomIE
from .formula1 import Formula1IE
from .fourtube import FourTubeIE
from .fourtube import (
    FourTubeIE,
    PornTubeIE,
    PornerBrosIE,
    FuxIE,
)
from .fox import FOXIE
from .fox9 import FOX9IE
from .foxgay import FoxgayIE
@ -505,6 +513,7 @@ from .la7 import LA7IE
from .laola1tv import (
    Laola1TvEmbedIE,
    Laola1TvIE,
    ITTFIE,
)
from .lci import LCIIE
from .lcp import (
@ -532,7 +541,10 @@ from .limelight import (
    LimelightChannelListIE,
)
from .litv import LiTVIE
from .liveleak import LiveLeakIE
from .liveleak import (
    LiveLeakIE,
    LiveLeakEmbedIE,
)
from .livestream import (
    LivestreamIE,
    LivestreamOriginalIE,
@ -559,6 +571,7 @@ from .matchtv import MatchTVIE
from .mdr import MDRIE
from .mediaset import MediasetIE
from .medici import MediciIE
from .megaphone import MegaphoneIE
from .meipai import MeipaiIE
from .melonvod import MelonVODIE
from .meta import METAIE
@ -585,7 +598,6 @@ from .mixcloud import (
)
from .mlb import MLBIE
from .mnet import MnetIE
from .mpora import MporaIE
from .moevideo import MoeVideoIE
from .mofosex import MofosexIE
from .mojvideo import MojvideoIE
@ -657,6 +669,10 @@ from .nextmedia import (
    AppleDailyIE,
    NextTVIE,
)
from .nexx import (
    NexxIE,
    NexxEmbedIE,
)
from .nfb import NFBIE
from .nfl import NFLIE
from .nhk import NhkVodIE
@ -670,6 +686,7 @@ from .nick import (
    NickIE,
    NickDeIE,
    NickNightIE,
    NickRuIE,
)
from .niconico import NiconicoIE, NiconicoPlaylistIE
from .ninecninemedia import (
@ -765,6 +782,7 @@ from .pandoratv import PandoraTVIE
from .parliamentliveuk import ParliamentLiveUKIE
from .patreon import PatreonIE
from .pbs import PBSIE
from .pearvideo import PearVideoIE
from .people import PeopleIE
from .periscope import (
    PeriscopeIE,
@ -836,6 +854,10 @@ from .rai import (
from .rbmaradio import RBMARadioIE
from .rds import RDSIE
from .redbulltv import RedBullTVIE
from .reddit import (
    RedditIE,
    RedditRIE,
)
from .redtube import RedTubeIE
from .regiotv import RegioTVIE
from .rentv import (
@ -929,8 +951,9 @@ from .soundcloud import (
    SoundcloudIE,
    SoundcloudSetIE,
    SoundcloudUserIE,
    SoundcloudTrackStationIE,
    SoundcloudPlaylistIE,
    SoundcloudSearchIE
    SoundcloudSearchIE,
)
from .soundgasm import (
    SoundgasmIE,
@ -988,7 +1011,6 @@ from .teachertube import (
)
from .teachingchannel import TeachingChannelIE
from .teamcoco import TeamcocoIE
from .teamfourstar import TeamFourStarIE
from .techtalks import TechTalksIE
from .ted import TEDIE
from .tele13 import Tele13IE
@ -1217,6 +1239,7 @@ from .vodlocker import VodlockerIE
from .vodpl import VODPlIE
from .vodplatform import VODPlatformIE
from .voicerepublic import VoiceRepublicIE
from .voot import VootIE
from .voxmedia import VoxMediaIE
from .vporn import VpornIE
from .vrt import VRTIE
@ -1238,6 +1261,7 @@ from .washingtonpost import (
    WashingtonPostArticleIE,
)
from .wat import WatIE
from .watchbox import WatchBoxIE
from .watchindianporn import WatchIndianPornIE
from .wdr import (
    WDRIE,
@ -1292,6 +1316,7 @@ from .yandexmusic import (
    YandexMusicAlbumIE,
    YandexMusicPlaylistIE,
)
from .yandexdisk import YandexDiskIE
from .yesjapan import YesJapanIE
from .yinyuetai import YinYueTaiIE
from .ynet import YnetIE
@ -43,7 +43,7 @@ class FiveTVIE(InfoExtractor):
        'info_dict': {
            'id': 'glavnoe',
            'ext': 'mp4',
            'title': 'Итоги недели с 8 по 14 июня 2015 года',
            'title': r're:^Итоги недели с \d+ по \d+ \w+ \d{4} года$',
            'thumbnail': r're:^https?://.*\.jpg$',
        },
    }, {
@ -70,7 +70,8 @@ class FiveTVIE(InfoExtractor):
        webpage = self._download_webpage(url, video_id)

        video_url = self._search_regex(
            r'<a[^>]+?href="([^"]+)"[^>]+?class="videoplayer"',
            [r'<div[^>]+?class="flowplayer[^>]+?data-href="([^"]+)"',
             r'<a[^>]+?href="([^"]+)"[^>]+?class="videoplayer"'],
            webpage, 'video url')

        title = self._og_search_title(webpage, default=None) or self._search_regex(
@ -3,39 +3,22 @@ from __future__ import unicode_literals
import re

from .common import InfoExtractor
from ..compat import compat_urlparse
from ..utils import (
    parse_duration,
    parse_iso8601,
    sanitized_Request,
    str_to_int,
)


class FourTubeIE(InfoExtractor):
    IE_NAME = '4tube'
    _VALID_URL = r'https?://(?:www\.)?4tube\.com/videos/(?P<id>\d+)'

    _TEST = {
        'url': 'http://www.4tube.com/videos/209733/hot-babe-holly-michaels-gets-her-ass-stuffed-by-black',
        'md5': '6516c8ac63b03de06bc8eac14362db4f',
        'info_dict': {
            'id': '209733',
            'ext': 'mp4',
            'title': 'Hot Babe Holly Michaels gets her ass stuffed by black',
            'uploader': 'WCP Club',
            'uploader_id': 'wcp-club',
            'upload_date': '20131031',
            'timestamp': 1383263892,
            'duration': 583,
            'view_count': int,
            'like_count': int,
            'categories': list,
            'age_limit': 18,
        }
    }

class FourTubeBaseIE(InfoExtractor):
    def _real_extract(self, url):
        video_id = self._match_id(url)
        mobj = re.match(self._VALID_URL, url)
        kind, video_id, display_id = mobj.group('kind', 'id', 'display_id')

        if kind == 'm' or not display_id:
            url = self._URL_TEMPLATE % video_id

        webpage = self._download_webpage(url, video_id)

        title = self._html_search_meta('name', webpage)
@ -43,10 +26,10 @@ class FourTubeIE(InfoExtractor):
            'uploadDate', webpage))
        thumbnail = self._html_search_meta('thumbnailUrl', webpage)
        uploader_id = self._html_search_regex(
            r'<a class="item-to-subscribe" href="[^"]+/channels/([^/"]+)" title="Go to [^"]+ page">',
            r'<a class="item-to-subscribe" href="[^"]+/(?:channel|user)s?/([^/"]+)" title="Go to [^"]+ page">',
            webpage, 'uploader id', fatal=False)
        uploader = self._html_search_regex(
            r'<a class="item-to-subscribe" href="[^"]+/channels/[^/"]+" title="Go to ([^"]+) page">',
            r'<a class="item-to-subscribe" href="[^"]+/(?:channel|user)s?/[^/"]+" title="Go to ([^"]+) page">',
            webpage, 'uploader', fatal=False)

        categories_html = self._search_regex(
@ -60,10 +43,10 @@ class FourTubeIE(InfoExtractor):

        view_count = str_to_int(self._search_regex(
            r'<meta[^>]+itemprop="interactionCount"[^>]+content="UserPlays:([0-9,]+)">',
            webpage, 'view count', fatal=False))
            webpage, 'view count', default=None))
        like_count = str_to_int(self._search_regex(
            r'<meta[^>]+itemprop="interactionCount"[^>]+content="UserLikes:([0-9,]+)">',
            webpage, 'like count', fatal=False))
            webpage, 'like count', default=None))
        duration = parse_duration(self._html_search_meta('duration', webpage))

        media_id = self._search_regex(
@ -87,12 +70,12 @@ class FourTubeIE(InfoExtractor):

        token_url = 'https://tkn.kodicdn.com/{0}/desktop/{1}'.format(
            media_id, '+'.join(sources))
        headers = {
            b'Content-Type': b'application/x-www-form-urlencoded',
            b'Origin': b'https://www.4tube.com',
        }
        token_req = sanitized_Request(token_url, b'{}', headers)
        tokens = self._download_json(token_req, video_id)

        parsed_url = compat_urlparse.urlparse(url)
        tokens = self._download_json(token_url, video_id, data=b'', headers={
            'Origin': '%s://%s' % (parsed_url.scheme, parsed_url.hostname),
            'Referer': url,
        })
        formats = [{
            'url': tokens[format]['token'],
            'format_id': format + 'p',
@ -115,3 +98,126 @@ class FourTubeIE(InfoExtractor):
            'duration': duration,
            'age_limit': 18,
        }


class FourTubeIE(FourTubeBaseIE):
    IE_NAME = '4tube'
    _VALID_URL = r'https?://(?:(?P<kind>www|m)\.)?4tube\.com/(?:videos|embed)/(?P<id>\d+)(?:/(?P<display_id>[^/?#&]+))?'
    _URL_TEMPLATE = 'https://www.4tube.com/videos/%s/video'
    _TESTS = [{
        'url': 'http://www.4tube.com/videos/209733/hot-babe-holly-michaels-gets-her-ass-stuffed-by-black',
        'md5': '6516c8ac63b03de06bc8eac14362db4f',
        'info_dict': {
            'id': '209733',
            'ext': 'mp4',
            'title': 'Hot Babe Holly Michaels gets her ass stuffed by black',
            'uploader': 'WCP Club',
            'uploader_id': 'wcp-club',
            'upload_date': '20131031',
            'timestamp': 1383263892,
            'duration': 583,
            'view_count': int,
            'like_count': int,
            'categories': list,
            'age_limit': 18,
        },
    }, {
        'url': 'http://www.4tube.com/embed/209733',
        'only_matching': True,
    }, {
        'url': 'http://m.4tube.com/videos/209733/hot-babe-holly-michaels-gets-her-ass-stuffed-by-black',
        'only_matching': True,
    }]


class FuxIE(FourTubeBaseIE):
    _VALID_URL = r'https?://(?:(?P<kind>www|m)\.)?fux\.com/(?:video|embed)/(?P<id>\d+)(?:/(?P<display_id>[^/?#&]+))?'
    _URL_TEMPLATE = 'https://www.fux.com/video/%s/video'
    _TESTS = [{
        'url': 'https://www.fux.com/video/195359/awesome-fucking-kitchen-ends-cum-swallow',
        'info_dict': {
            'id': '195359',
            'ext': 'mp4',
            'title': 'Awesome fucking in the kitchen ends with cum swallow',
            'uploader': 'alenci2342',
            'uploader_id': 'alenci2342',
            'upload_date': '20131230',
            'timestamp': 1388361660,
            'duration': 289,
            'view_count': int,
            'like_count': int,
            'categories': list,
            'age_limit': 18,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'https://www.fux.com/embed/195359',
        'only_matching': True,
    }, {
        'url': 'https://www.fux.com/video/195359/awesome-fucking-kitchen-ends-cum-swallow',
        'only_matching': True,
    }]


class PornTubeIE(FourTubeBaseIE):
    _VALID_URL = r'https?://(?:(?P<kind>www|m)\.)?porntube\.com/(?:videos/(?P<display_id>[^/]+)_|embed/)(?P<id>\d+)'
    _URL_TEMPLATE = 'https://www.porntube.com/videos/video_%s'
    _TESTS = [{
        'url': 'https://www.porntube.com/videos/teen-couple-doing-anal_7089759',
        'info_dict': {
            'id': '7089759',
            'ext': 'mp4',
            'title': 'Teen couple doing anal',
            'uploader': 'Alexy',
            'uploader_id': 'Alexy',
            'upload_date': '20150606',
            'timestamp': 1433595647,
            'duration': 5052,
            'view_count': int,
            'like_count': int,
            'categories': list,
            'age_limit': 18,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'https://www.porntube.com/embed/7089759',
        'only_matching': True,
    }, {
        'url': 'https://m.porntube.com/videos/teen-couple-doing-anal_7089759',
        'only_matching': True,
    }]


class PornerBrosIE(FourTubeBaseIE):
    _VALID_URL = r'https?://(?:(?P<kind>www|m)\.)?pornerbros\.com/(?:videos/(?P<display_id>[^/]+)_|embed/)(?P<id>\d+)'
    _URL_TEMPLATE = 'https://www.pornerbros.com/videos/video_%s'
    _TESTS = [{
        'url': 'https://www.pornerbros.com/videos/skinny-brunette-takes-big-cock-down-her-anal-hole_181369',
        'md5': '6516c8ac63b03de06bc8eac14362db4f',
        'info_dict': {
            'id': '181369',
            'ext': 'mp4',
            'title': 'Skinny brunette takes big cock down her anal hole',
            'uploader': 'PornerBros HD',
            'uploader_id': 'pornerbros-hd',
            'upload_date': '20130130',
            'timestamp': 1359527401,
            'duration': 1224,
            'view_count': int,
            'categories': list,
            'age_limit': 18,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'https://www.pornerbros.com/embed/181369',
        'only_matching': True,
    }, {
        'url': 'https://m.pornerbros.com/videos/skinny-brunette-takes-big-cock-down-her-anal-hole_181369',
        'only_matching': True,
    }]
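
A sketch of the URL normalization that `FourTubeBaseIE._real_extract` now shares across these sites: mobile and embed URLs are rewritten to the canonical desktop page via `_URL_TEMPLATE` before the webpage is fetched. The regex and template below are copied from `FourTubeIE` above; the standalone helper itself is illustrative:

```
import re

_VALID_URL = r'https?://(?:(?P<kind>www|m)\.)?4tube\.com/(?:videos|embed)/(?P<id>\d+)(?:/(?P<display_id>[^/?#&]+))?'
_URL_TEMPLATE = 'https://www.4tube.com/videos/%s/video'

def canonical_url(url):
    mobj = re.match(_VALID_URL, url)
    kind, video_id, display_id = mobj.group('kind', 'id', 'display_id')
    if kind == 'm' or not display_id:
        url = _URL_TEMPLATE % video_id
    return url

assert canonical_url('http://www.4tube.com/embed/209733') == 'https://www.4tube.com/videos/209733/video'
assert canonical_url('http://m.4tube.com/videos/209733/some-title') == 'https://www.4tube.com/videos/209733/video'
assert canonical_url('http://www.4tube.com/videos/209733/some-title') == 'http://www.4tube.com/videos/209733/some-title'
```
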
@ -1,10 +1,14 @@
from __future__ import unicode_literals

import json
import re

from .common import InfoExtractor
from ..utils import ExtractorError
from ..utils import (
    ExtractorError,
    float_or_none,
    int_or_none,
    unified_timestamp,
)


class FunnyOrDieIE(InfoExtractor):
@ -18,6 +22,10 @@ class FunnyOrDieIE(InfoExtractor):
            'title': 'Heart-Shaped Box: Literal Video Version',
            'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',
            'thumbnail': r're:^http:.*\.jpg$',
            'uploader': 'DASjr',
            'timestamp': 1317904928,
            'upload_date': '20111006',
            'duration': 318.3,
        },
    }, {
        'url': 'http://www.funnyordie.com/embed/e402820827',
@ -27,6 +35,8 @@ class FunnyOrDieIE(InfoExtractor):
            'title': 'Please Use This Song (Jon Lajoie)',
            'description': 'Please use this to sell something. www.jonlajoie.com',
            'thumbnail': r're:^http:.*\.jpg$',
            'timestamp': 1398988800,
            'upload_date': '20140502',
        },
        'params': {
            'skip_download': True,
@ -100,15 +110,53 @@ class FunnyOrDieIE(InfoExtractor):
                'url': 'http://www.funnyordie.com%s' % src,
            }]

        post_json = self._search_regex(
            r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details')
        post = json.loads(post_json)
        timestamp = unified_timestamp(self._html_search_meta(
            'uploadDate', webpage, 'timestamp', default=None))

        uploader = self._html_search_regex(
            r'<h\d[^>]+\bclass=["\']channel-preview-name[^>]+>(.+?)</h',
            webpage, 'uploader', default=None)

        title, description, thumbnail, duration = [None] * 4

        medium = self._parse_json(
            self._search_regex(
                r'jsonMedium\s*=\s*({.+?});', webpage, 'JSON medium',
                default='{}'),
            video_id, fatal=False)
        if medium:
            title = medium.get('title')
            duration = float_or_none(medium.get('duration'))
            if not timestamp:
                timestamp = unified_timestamp(medium.get('publishDate'))

        post = self._parse_json(
            self._search_regex(
                r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details',
                default='{}'),
            video_id, fatal=False)
        if post:
            if not title:
                title = post.get('name')
            description = post.get('description')
            thumbnail = post.get('picture')

        if not title:
            title = self._og_search_title(webpage)
        if not description:
            description = self._og_search_description(webpage)
        if not duration:
            duration = int_or_none(self._html_search_meta(
                ('video:duration', 'duration'), webpage, 'duration', default=False))

        return {
            'id': video_id,
            'title': post['name'],
            'description': post.get('description'),
            'thumbnail': post.get('picture'),
            'title': title,
            'description': description,
            'thumbnail': thumbnail,
            'uploader': uploader,
            'timestamp': timestamp,
            'duration': duration,
            'formats': formats,
            'subtitles': subtitles,
        }
@ -36,6 +36,10 @@ from .brightcove import (
    BrightcoveLegacyIE,
    BrightcoveNewIE,
)
from .nexx import (
    NexxIE,
    NexxEmbedIE,
)
from .nbc import NBCSportsVPlayerIE
from .ooyala import OoyalaIE
from .rutv import RUTVIE
@ -93,6 +97,8 @@ from .washingtonpost import WashingtonPostIE
from .wistia import WistiaIE
from .mediaset import MediasetIE
from .joj import JojIE
from .megaphone import MegaphoneIE
from .vzaar import VzaarIE


class GenericIE(InfoExtractor):
@ -570,6 +576,19 @@ class GenericIE(InfoExtractor):
            },
            'skip': 'movie expired',
        },
        # ooyala video embedded with http://player.ooyala.com/static/v4/production/latest/core.min.js
        {
            'url': 'http://wnep.com/2017/07/22/steampunk-fest-comes-to-honesdale/',
            'info_dict': {
                'id': 'lwYWYxYzE6V5uJMjNGyKtwwiw9ZJD7t2',
                'ext': 'mp4',
                'title': 'Steampunk Fest Comes to Honesdale',
                'duration': 43.276,
            },
            'params': {
                'skip_download': True,
            }
        },
        # embed.ly video
        {
            'url': 'http://www.tested.com/science/weird/460206-tested-grinding-coffee-2000-frames-second/',
@ -1500,14 +1519,27 @@ class GenericIE(InfoExtractor):
        # LiveLeak embed
        {
            'url': 'http://www.wykop.pl/link/3088787/',
            'md5': 'ace83b9ed19b21f68e1b50e844fdf95d',
            'md5': '7619da8c820e835bef21a1efa2a0fc71',
            'info_dict': {
                'id': '874_1459135191',
                'ext': 'mp4',
                'title': 'Man shows poor quality of new apartment building',
                'description': 'The wall is like a sand pile.',
                'uploader': 'Lake8737',
            }
            },
            'add_ie': [LiveLeakIE.ie_key()],
        },
        # Another LiveLeak embed pattern (#13336)
        {
            'url': 'https://milo.yiannopoulos.net/2017/06/concealed-carry-robbery/',
            'info_dict': {
                'id': '2eb_1496309988',
                'ext': 'mp4',
                'title': 'Thief robs place where everyone was armed',
                'description': 'md5:694d73ee79e535953cf2488562288eee',
                'uploader': 'brazilwtf',
            },
            'add_ie': [LiveLeakIE.ie_key()],
        },
        # Duplicated embedded video URLs
        {
@ -1549,6 +1581,22 @@ class GenericIE(InfoExtractor):
            },
            'add_ie': ['BrightcoveLegacy'],
        },
        # Nexx embed
        {
            'url': 'https://www.funk.net/serien/5940e15073f6120001657956/items/593efbb173f6120001657503',
            'info_dict': {
                'id': '247746',
                'ext': 'mp4',
                'title': "Yesterday's Jam (OV)",
                'description': 'md5:09bc0984723fed34e2581624a84e05f0',
                'timestamp': 1492594816,
                'upload_date': '20170419',
            },
            'params': {
                'format': 'bestvideo',
                'skip_download': True,
            },
        },
        # Facebook <iframe> embed
        {
            'url': 'https://www.hostblogger.de/blog/archives/6181-Auto-jagt-Betonmischer.html',
@ -1750,6 +1798,21 @@ class GenericIE(InfoExtractor):
            },
            'playlist_mincount': 5,
        },
        {
            # Limelight embed (LimelightPlayerUtil.embed)
            'url': 'https://tv5.ca/videos?v=xuu8qowr291ri',
            'info_dict': {
                'id': '95d035dc5c8a401588e9c0e6bd1e9c92',
                'ext': 'mp4',
                'title': '07448641',
                'timestamp': 1499890639,
                'upload_date': '20170712',
            },
            'params': {
                'skip_download': True,
            },
            'add_ie': ['LimelightMedia'],
        },
        {
            'url': 'http://kron4.com/2017/04/28/standoff-with-walnut-creek-murder-suspect-ends-with-arrest/',
            'info_dict': {
@ -1806,6 +1869,16 @@ class GenericIE(InfoExtractor):
                'title': 'Стас Намин: «Мы нарушили девственность Кремля»',
            },
        },
        {
            # vzaar embed
            'url': 'http://help.vzaar.com/article/165-embedding-video',
            'md5': '7e3919d9d2620b89e3e00bec7fe8c9d4',
            'info_dict': {
                'id': '8707641',
                'ext': 'mp4',
                'title': 'Building A Business Online: Principal Chairs Q & A',
            },
        },
        # {
        #     # TODO: find another test
        #     # http://schema.org/VideoObject
@ -1955,7 +2028,7 @@ class GenericIE(InfoExtractor):

        if head_response is not False:
            # Check for redirect
            new_url = head_response.geturl()
            new_url = compat_str(head_response.geturl())
            if url != new_url:
                self.report_following_redirect(new_url)
                if force_videoid:
@ -2056,7 +2129,7 @@ class GenericIE(InfoExtractor):
            elif re.match(r'(?i)^(?:{[^}]+})?MPD$', doc.tag):
                info_dict['formats'] = self._parse_mpd_formats(
                    doc, video_id,
                    mpd_base_url=full_response.geturl().rpartition('/')[0],
                    mpd_base_url=compat_str(full_response.geturl()).rpartition('/')[0],
                    mpd_url=url)
                self._sort_formats(info_dict['formats'])
                return info_dict
@ -2133,6 +2206,16 @@ class GenericIE(InfoExtractor):
        if bc_urls:
            return self.playlist_from_matches(bc_urls, video_id, video_title, ie='BrightcoveNew')

        # Look for Nexx embeds
        nexx_urls = NexxIE._extract_urls(webpage)
        if nexx_urls:
            return self.playlist_from_matches(nexx_urls, video_id, video_title, ie=NexxIE.ie_key())

        # Look for Nexx iFrame embeds
        nexx_embed_urls = NexxEmbedIE._extract_urls(webpage)
        if nexx_embed_urls:
            return self.playlist_from_matches(nexx_embed_urls, video_id, video_title, ie=NexxEmbedIE.ie_key())

        # Look for ThePlatform embeds
        tp_urls = ThePlatformIE._extract_urls(webpage)
        if tp_urls:
@ -2262,6 +2345,7 @@ class GenericIE(InfoExtractor):
        # Look for Ooyala videos
|
||||
mobj = (re.search(r'player\.ooyala\.com/[^"?]+[?#][^"]*?(?:embedCode|ec)=(?P<ec>[^"&]+)', webpage) or
|
||||
re.search(r'OO\.Player\.create\([\'"].*?[\'"],\s*[\'"](?P<ec>.{32})[\'"]', webpage) or
|
||||
re.search(r'OO\.Player\.create\.apply\(\s*OO\.Player\s*,\s*op\(\s*\[\s*[\'"][^\'"]*[\'"]\s*,\s*[\'"](?P<ec>.{32})[\'"]', webpage) or
|
||||
re.search(r'SBN\.VideoLinkset\.ooyala\([\'"](?P<ec>.{32})[\'"]\)', webpage) or
|
||||
re.search(r'data-ooyala-video-id\s*=\s*[\'"](?P<ec>.{32})[\'"]', webpage))
|
||||
if mobj is not None:
|
||||
@ -2686,9 +2770,9 @@ class GenericIE(InfoExtractor):
|
||||
self._proto_relative_url(instagram_embed_url), InstagramIE.ie_key())
|
||||
|
||||
# Look for LiveLeak embeds
|
||||
liveleak_url = LiveLeakIE._extract_url(webpage)
|
||||
if liveleak_url:
|
||||
return self.url_result(liveleak_url, 'LiveLeak')
|
||||
liveleak_urls = LiveLeakIE._extract_urls(webpage)
|
||||
if liveleak_urls:
|
||||
return self.playlist_from_matches(liveleak_urls, video_id, video_title)
|
||||
|
||||
# Look for 3Q SDN embeds
|
||||
threeqsdn_url = ThreeQSDNIE._extract_url(webpage)
|
||||
@ -2740,7 +2824,7 @@ class GenericIE(InfoExtractor):
|
||||
rutube_urls = RutubeIE._extract_urls(webpage)
|
||||
if rutube_urls:
|
||||
return self.playlist_from_matches(
|
||||
rutube_urls, ie=RutubeIE.ie_key())
|
||||
rutube_urls, video_id, video_title, ie=RutubeIE.ie_key())
|
||||
|
||||
# Look for WashingtonPost embeds
|
||||
wapo_urls = WashingtonPostIE._extract_urls(webpage)
|
||||
@ -2760,6 +2844,18 @@ class GenericIE(InfoExtractor):
|
||||
return self.playlist_from_matches(
|
||||
joj_urls, video_id, video_title, ie=JojIE.ie_key())
|
||||
|
||||
# Look for megaphone.fm embeds
|
||||
mpfn_urls = MegaphoneIE._extract_urls(webpage)
|
||||
if mpfn_urls:
|
||||
return self.playlist_from_matches(
|
||||
mpfn_urls, video_id, video_title, ie=MegaphoneIE.ie_key())
|
||||
|
||||
# Look for vzaar embeds
|
||||
vzaar_urls = VzaarIE._extract_urls(webpage)
|
||||
if vzaar_urls:
|
||||
return self.playlist_from_matches(
|
||||
vzaar_urls, video_id, video_title, ie=VzaarIE.ie_key())
|
||||
|
||||
def merge_dicts(dict1, dict2):
|
||||
merged = {}
|
||||
for k, v in dict1.items():
|
||||
|
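The generic.py hunks above all follow the same hook convention: a provider extractor exposes a class-level `_extract_urls(webpage)` helper returning every embed URL found on a page, and `GenericIE` turns the matches into a playlist with `playlist_from_matches`. A minimal sketch of that shape follows; `ExampleIE` and `player.example.com` are hypothetical placeholders, not part of the commit:

```python
# Minimal sketch of the embed-hook convention used by the hunks above.
# ExampleIE and player.example.com are made-up placeholders.
import re

from .common import InfoExtractor


class ExampleIE(InfoExtractor):
    _VALID_URL = r'https?://player\.example\.com/embed/(?P<id>\d+)'

    @staticmethod
    def _extract_urls(webpage):
        # Return every embed URL on the page (not just the first) so that
        # GenericIE can build a playlist when a page embeds several videos.
        return re.findall(
            r'<iframe[^>]+src=["\'](https?://player\.example\.com/embed/\d+)',
            webpage)
```

GenericIE would then add the usual two-liner — collect `ExampleIE._extract_urls(webpage)` and, when non-empty, `return self.playlist_from_matches(urls, video_id, video_title, ie=ExampleIE.ie_key())` — exactly as the Nexx, megaphone.fm, and vzaar hooks above do.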
@@ -7,6 +7,7 @@ from ..utils import (
    ExtractorError,
    int_or_none,
    lowercase_escape,
    update_url_query,
)

@@ -24,7 +25,14 @@ class GoogleDriveIE(InfoExtractor):
    }, {
        # video id is longer than 28 characters
        'url': 'https://drive.google.com/file/d/1ENcQ_jeCuj7y19s66_Ou9dRP4GKGsodiDQ/edit',
        'only_matching': True,
        'md5': 'c230c67252874fddd8170e3fd1a45886',
        'info_dict': {
            'id': '1ENcQ_jeCuj7y19s66_Ou9dRP4GKGsodiDQ',
            'ext': 'mp4',
            'title': 'Andreea Banica feat Smiley - Hooky Song (Official Video).mp4',
            'duration': 189,
        },
        'only_matching': True
    }]
    _FORMATS_EXT = {
        '5': 'flv',

@@ -44,6 +52,13 @@ class GoogleDriveIE(InfoExtractor):
        '46': 'webm',
        '59': 'mp4',
    }
    _BASE_URL_CAPTIONS = 'https://drive.google.com/timedtext'
    _CAPTIONS_ENTRY_TAG = {
        'subtitles': 'track',
        'automatic_captions': 'target',
    }
    _caption_formats_ext = []
    _captions_xml = None

    @staticmethod
    def _extract_url(webpage):

@@ -53,21 +68,99 @@ class GoogleDriveIE(InfoExtractor):
        if mobj:
            return 'https://drive.google.com/file/d/%s' % mobj.group('id')

    def _download_subtitles_xml(self, video_id, subtitles_id, hl):
        if self._captions_xml:
            return
        self._captions_xml = self._download_xml(
            self._BASE_URL_CAPTIONS, video_id, query={
                'id': video_id,
                'vid': subtitles_id,
                'hl': hl,
                'v': video_id,
                'type': 'list',
                'tlangs': '1',
                'fmts': '1',
                'vssids': '1',
            }, note='Downloading subtitles XML',
            errnote='Unable to download subtitles XML', fatal=False)
        if self._captions_xml:
            for f in self._captions_xml.findall('format'):
                if f.attrib.get('fmt_code') and not f.attrib.get('default'):
                    self._caption_formats_ext.append(f.attrib['fmt_code'])

    def _get_captions_by_type(self, video_id, subtitles_id, caption_type,
                              origin_lang_code=None):
        if not subtitles_id or not caption_type:
            return
        captions = {}
        for caption_entry in self._captions_xml.findall(
                self._CAPTIONS_ENTRY_TAG[caption_type]):
            caption_lang_code = caption_entry.attrib.get('lang_code')
            if not caption_lang_code:
                continue
            caption_format_data = []
            for caption_format in self._caption_formats_ext:
                query = {
                    'vid': subtitles_id,
                    'v': video_id,
                    'fmt': caption_format,
                    'lang': (caption_lang_code if origin_lang_code is None
                             else origin_lang_code),
                    'type': 'track',
                    'name': '',
                    'kind': '',
                }
                if origin_lang_code is not None:
                    query.update({'tlang': caption_lang_code})
                caption_format_data.append({
                    'url': update_url_query(self._BASE_URL_CAPTIONS, query),
                    'ext': caption_format,
                })
            captions[caption_lang_code] = caption_format_data
        return captions

    def _get_subtitles(self, video_id, subtitles_id, hl):
        if not subtitles_id or not hl:
            return
        self._download_subtitles_xml(video_id, subtitles_id, hl)
        if not self._captions_xml:
            return
        return self._get_captions_by_type(video_id, subtitles_id, 'subtitles')

    def _get_automatic_captions(self, video_id, subtitles_id, hl):
        if not subtitles_id or not hl:
            return
        self._download_subtitles_xml(video_id, subtitles_id, hl)
        if not self._captions_xml:
            return
        track = self._captions_xml.find('track')
        if track is None:
            return
        origin_lang_code = track.attrib.get('lang_code')
        if not origin_lang_code:
            return
        return self._get_captions_by_type(
            video_id, subtitles_id, 'automatic_captions', origin_lang_code)

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(
            'http://docs.google.com/file/d/%s' % video_id, video_id)

        reason = self._search_regex(r'"reason"\s*,\s*"([^"]+)', webpage, 'reason', default=None)
        reason = self._search_regex(
            r'"reason"\s*,\s*"([^"]+)', webpage, 'reason', default=None)
        if reason:
            raise ExtractorError(reason)

        title = self._search_regex(r'"title"\s*,\s*"([^"]+)', webpage, 'title')
        duration = int_or_none(self._search_regex(
            r'"length_seconds"\s*,\s*"([^"]+)', webpage, 'length seconds', default=None))
            r'"length_seconds"\s*,\s*"([^"]+)', webpage, 'length seconds',
            default=None))
        fmt_stream_map = self._search_regex(
            r'"fmt_stream_map"\s*,\s*"([^"]+)', webpage, 'fmt stream map').split(',')
        fmt_list = self._search_regex(r'"fmt_list"\s*,\s*"([^"]+)', webpage, 'fmt_list').split(',')
            r'"fmt_stream_map"\s*,\s*"([^"]+)', webpage,
            'fmt stream map').split(',')
        fmt_list = self._search_regex(
            r'"fmt_list"\s*,\s*"([^"]+)', webpage, 'fmt_list').split(',')

        resolutions = {}
        for fmt in fmt_list:

@@ -97,10 +190,24 @@ class GoogleDriveIE(InfoExtractor):
            formats.append(f)
        self._sort_formats(formats)

        hl = self._search_regex(
            r'"hl"\s*,\s*"([^"]+)', webpage, 'hl', default=None)
        subtitles_id = None
        ttsurl = self._search_regex(
            r'"ttsurl"\s*,\s*"([^"]+)', webpage, 'ttsurl', default=None)
        if ttsurl:
            # the video Id for subtitles will be the last value in the ttsurl
            # query string
            subtitles_id = ttsurl.encode('utf-8').decode(
                'unicode_escape').split('=')[-1]

        return {
            'id': video_id,
            'title': title,
            'thumbnail': self._og_search_thumbnail(webpage, default=None),
            'duration': duration,
            'formats': formats,
            'subtitles': self.extract_subtitles(video_id, subtitles_id, hl),
            'automatic_captions': self.extract_automatic_captions(
                video_id, subtitles_id, hl),
        }
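For reference, the caption helpers above produce one URL per language/format pair against the timedtext endpoint. A small illustration of how `_get_captions_by_type` assembles such a URL with `update_url_query`; the IDs and the `srv3` format code below are made-up placeholders, not values from the commit:

```python
# Illustration only: assembling one caption URL the way
# _get_captions_by_type does. All values here are placeholders.
from youtube_dl.utils import update_url_query

query = {
    'vid': 'SUBTITLES_ID',  # hypothetical id taken from the ttsurl
    'v': 'VIDEO_ID',        # hypothetical Drive file id
    'fmt': 'srv3',          # a fmt_code advertised by the subtitles XML
    'lang': 'en',
    'type': 'track',
    'name': '',
    'kind': '',
}
print(update_url_query('https://drive.google.com/timedtext', query))
# https://drive.google.com/timedtext?vid=SUBTITLES_ID&v=VIDEO_ID&fmt=srv3&...
```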
@@ -59,12 +59,18 @@ class ITVIE(InfoExtractor):
        def _add_sub_element(element, name):
            return etree.SubElement(element, _add_ns(name))

        production_id = (
            params.get('data-video-autoplay-id') or
            '%s#001' % (
                params.get('data-video-episode-id') or
                video_id.replace('a', '/')))

        req_env = etree.Element(_add_ns('soapenv:Envelope'))
        _add_sub_element(req_env, 'soapenv:Header')
        body = _add_sub_element(req_env, 'soapenv:Body')
        get_playlist = _add_sub_element(body, ('tem:GetPlaylist'))
        request = _add_sub_element(get_playlist, 'tem:request')
        _add_sub_element(request, 'itv:ProductionId').text = params['data-video-id']
        _add_sub_element(request, 'itv:ProductionId').text = production_id
        _add_sub_element(request, 'itv:RequestGuid').text = compat_str(uuid.uuid4()).upper()
        vodcrid = _add_sub_element(request, 'itv:Vodcrid')
        _add_sub_element(vodcrid, 'com:Id')
@@ -48,7 +48,7 @@ class KarriereVideosIE(InfoExtractor):
        webpage = self._download_webpage(url, video_id)

        title = (self._html_search_meta('title', webpage, default=None) or
                 self._search_regex(r'<h1 class="title">([^<]+)</h1>'))
                 self._search_regex(r'<h1 class="title">([^<]+)</h1>', webpage, 'video title'))

        video_id = self._search_regex(
            r'/config/video/(.+?)\.xml', webpage, 'video id')
@@ -215,3 +215,21 @@ class Laola1TvIE(Laola1TvEmbedIE):
            'formats': formats,
            'is_live': is_live,
        }


class ITTFIE(InfoExtractor):
    _VALID_URL = r'https?://tv\.ittf\.com/video/[^/]+/(?P<id>\d+)'
    _TEST = {
        'url': 'https://tv.ittf.com/video/peng-wang-wei-matsudaira-kenta/951802',
        'only_matching': True,
    }

    def _real_extract(self, url):
        return self.url_result(
            update_url_query('https://www.laola1.tv/titanplayer.php', {
                'videoid': self._match_id(url),
                'type': 'V',
                'lang': 'en',
                'portal': 'int',
                'customer': 1024,
            }), Laola1TvEmbedIE.ie_key())
@@ -26,14 +26,16 @@ class LimelightBaseIE(InfoExtractor):
            'Channel': 'channel',
            'ChannelList': 'channel_list',
        }

        def smuggle(url):
            return smuggle_url(url, {'source_url': source_url})

        entries = []
        for kind, video_id in re.findall(
                r'LimelightPlayer\.doLoad(Media|Channel|ChannelList)\(["\'](?P<id>[a-z0-9]{32})',
                webpage):
            entries.append(cls.url_result(
                smuggle_url(
                    'limelight:%s:%s' % (lm[kind], video_id),
                    {'source_url': source_url}),
                smuggle('limelight:%s:%s' % (lm[kind], video_id)),
                'Limelight%s' % kind, video_id))
        for mobj in re.finditer(
                # As per [1] class attribute should be exactly equal to

@@ -49,10 +51,15 @@ class LimelightBaseIE(InfoExtractor):
                ''', webpage):
            kind, video_id = mobj.group('kind'), mobj.group('id')
            entries.append(cls.url_result(
                smuggle_url(
                    'limelight:%s:%s' % (kind, video_id),
                    {'source_url': source_url}),
                smuggle('limelight:%s:%s' % (kind, video_id)),
                'Limelight%s' % kind.capitalize(), video_id))
        # http://support.3playmedia.com/hc/en-us/articles/115009517327-Limelight-Embedding-the-Audio-Description-Plugin-with-the-Limelight-Player-on-Your-Web-Page)
        for video_id in re.findall(
                r'(?s)LimelightPlayerUtil\.embed\s*\(\s*{.*?\bmediaId["\']\s*:\s*["\'](?P<id>[a-z0-9]{32})',
                webpage):
            entries.append(cls.url_result(
                smuggle('limelight:media:%s' % video_id),
                LimelightMediaIE.ie_key(), video_id))
        return entries

    def _call_playlist_service(self, item_id, method, fatal=True, referer=None):
@@ -72,15 +72,20 @@ class LiveLeakIE(InfoExtractor):
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'https://www.liveleak.com/view?i=677_1439397581',
        'info_dict': {
            'id': '677_1439397581',
            'title': 'Fuel Depot in China Explosion caught on video',
        },
        'playlist_count': 3,
    }]

    @staticmethod
    def _extract_url(webpage):
        mobj = re.search(
            r'<iframe[^>]+src="https?://(?:\w+\.)?liveleak\.com/ll_embed\?(?:.*?)i=(?P<id>[\w_]+)(?:.*)',
    def _extract_urls(webpage):
        return re.findall(
            r'<iframe[^>]+src="(https?://(?:\w+\.)?liveleak\.com/ll_embed\?[^"]*[if]=[\w_]+[^"]+)"',
            webpage)
        if mobj:
            return 'http://www.liveleak.com/view?i=%s' % mobj.group('id')

    def _real_extract(self, url):
        video_id = self._match_id(url)

@@ -111,23 +116,54 @@ class LiveLeakIE(InfoExtractor):
            'age_limit': age_limit,
        }

        info_dict = entries[0]
        for idx, info_dict in enumerate(entries):
            for a_format in info_dict['formats']:
                if not a_format.get('height'):
                    a_format['height'] = int_or_none(self._search_regex(
                        r'([0-9]+)p\.mp4', a_format['url'], 'height label',
                        default=None))

        for a_format in info_dict['formats']:
            if not a_format.get('height'):
                a_format['height'] = int_or_none(self._search_regex(
                    r'([0-9]+)p\.mp4', a_format['url'], 'height label',
                    default=None))
            self._sort_formats(info_dict['formats'])

        self._sort_formats(info_dict['formats'])
            # Don't append entry ID for one-video pages to keep backward compatibility
            if len(entries) > 1:
                info_dict['id'] = '%s_%s' % (video_id, idx + 1)
            else:
                info_dict['id'] = video_id

        info_dict.update({
            'id': video_id,
            'title': video_title,
            'description': video_description,
            'uploader': video_uploader,
            'age_limit': age_limit,
            'thumbnail': video_thumbnail,
        })
            info_dict.update({
                'title': video_title,
                'description': video_description,
                'uploader': video_uploader,
                'age_limit': age_limit,
                'thumbnail': video_thumbnail,
            })

        return info_dict
        return self.playlist_result(entries, video_id, video_title)


class LiveLeakEmbedIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?liveleak\.com/ll_embed\?.*?\b(?P<kind>[if])=(?P<id>[\w_]+)'

    # See generic.py for actual test cases
    _TESTS = [{
        'url': 'https://www.liveleak.com/ll_embed?i=874_1459135191',
        'only_matching': True,
    }, {
        'url': 'https://www.liveleak.com/ll_embed?f=ab065df993c1',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        kind, video_id = mobj.group('kind', 'id')

        if kind == 'f':
            webpage = self._download_webpage(url, video_id)
            liveleak_url = self._search_regex(
                r'logourl\s*:\s*(?P<q1>[\'"])(?P<url>%s)(?P=q1)' % LiveLeakIE._VALID_URL,
                webpage, 'LiveLeak URL', group='url')
        elif kind == 'i':
            liveleak_url = 'http://www.liveleak.com/view?i=%s' % video_id

        return self.url_result(liveleak_url, ie=LiveLeakIE.ie_key())
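Note the API change above: the single-match `_extract_url` (which returned a canonical view URL) is replaced by `_extract_urls`, which returns every matching iframe `src` verbatim so that both `i=`-style and `f=`-style embeds are routed through the new `LiveLeakEmbedIE`. A quick sketch of what the new regex yields on a page carrying both embed styles (the markup below is made up for illustration):

```python
# Sketch: the new LiveLeakIE._extract_urls regex applied to sample
# markup containing both embed styles. The HTML below is invented.
import re

webpage = '''
<iframe src="https://www.liveleak.com/ll_embed?i=874_1459135191"></iframe>
<iframe src="https://www.liveleak.com/ll_embed?f=ab065df993c1" width="640"></iframe>
'''
urls = re.findall(
    r'<iframe[^>]+src="(https?://(?:\w+\.)?liveleak\.com/ll_embed\?[^"]*[if]=[\w_]+[^"]+)"',
    webpage)
print(urls)  # both embed URLs, returned verbatim
```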
55
youtube_dl/extractor/megaphone.py
Normal file
@@ -0,0 +1,55 @@
# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..utils import js_to_json


class MegaphoneIE(InfoExtractor):
    IE_NAME = 'megaphone.fm'
    IE_DESC = 'megaphone.fm embedded players'
    _VALID_URL = r'https://player\.megaphone\.fm/(?P<id>[A-Z0-9]+)'
    _TEST = {
        'url': 'https://player.megaphone.fm/GLT9749789991?"',
        'md5': '4816a0de523eb3e972dc0dda2c191f96',
        'info_dict': {
            'id': 'GLT9749789991',
            'ext': 'mp3',
            'title': '#97 What Kind Of Idiot Gets Phished?',
            'thumbnail': 're:^https://.*\.png.*$',
            'duration': 1776.26375,
            'author': 'Reply All',
        },
    }

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        title = self._og_search_property('audio:title', webpage)
        author = self._og_search_property('audio:artist', webpage)
        thumbnail = self._og_search_thumbnail(webpage)

        episode_json = self._search_regex(r'(?s)var\s+episode\s*=\s*(\{.+?\});', webpage, 'episode JSON')
        episode_data = self._parse_json(episode_json, video_id, js_to_json)
        video_url = self._proto_relative_url(episode_data['mediaUrl'], 'https:')

        formats = [{
            'url': video_url,
        }]

        return {
            'id': video_id,
            'thumbnail': thumbnail,
            'title': title,
            'author': author,
            'duration': episode_data['duration'],
            'formats': formats,
        }

    @classmethod
    def _extract_urls(cls, webpage):
        return [m[0] for m in re.findall(
            r'<iframe[^>]*?\ssrc=["\'](%s)' % cls._VALID_URL, webpage)]
@@ -9,6 +9,7 @@ from .common import InfoExtractor
from ..compat import (
    compat_chr,
    compat_ord,
    compat_str,
    compat_urllib_parse_unquote,
    compat_urlparse,
)

@@ -53,16 +54,27 @@ class MixcloudIE(InfoExtractor):
        'only_matching': True,
    }]

    _keys = [
        'return { requestAnimationFrame: function(callback) { callback(); }, innerHeight: 500 };',
        'pleasedontdownloadourmusictheartistswontgetpaid',
        'window.addEventListener = window.addEventListener || function() {};',
        '(function() { return new Date().toLocaleDateString(); })()'
    ]
    _current_key = None

    # See https://www.mixcloud.com/media/js2/www_js_2.9e23256562c080482435196ca3975ab5.js
    @staticmethod
    def _decrypt_play_info(play_info):
        KEY = 'pleasedontdownloadourmusictheartistswontgetpaid'

    def _decrypt_play_info(self, play_info, video_id):
        play_info = base64.b64decode(play_info.encode('ascii'))

        return ''.join([
            compat_chr(compat_ord(ch) ^ compat_ord(KEY[idx % len(KEY)]))
            for idx, ch in enumerate(play_info)])
        for num, key in enumerate(self._keys, start=1):
            try:
                return self._parse_json(
                    ''.join([
                        compat_chr(compat_ord(ch) ^ compat_ord(key[idx % len(key)]))
                        for idx, ch in enumerate(play_info)]),
                    video_id)
            except ExtractorError:
                if num == len(self._keys):
                    raise

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)

@@ -72,14 +84,30 @@ class MixcloudIE(InfoExtractor):

        webpage = self._download_webpage(url, track_id)

        if not self._current_key:
            js_url = self._search_regex(
                r'<script[^>]+\bsrc=["\"](https://(?:www\.)?mixcloud\.com/media/js2/www_js_4\.[^>]+\.js)',
                webpage, 'js url', default=None)
            if js_url:
                js = self._download_webpage(js_url, track_id, fatal=False)
                if js:
                    KEY_RE_TEMPLATE = r'player\s*:\s*{.*?\b%s\s*:\s*(["\'])(?P<key>(?:(?!\1).)+)\1'
                    for key_name in ('value', 'key_value'):
                        key = self._search_regex(
                            KEY_RE_TEMPLATE % key_name, js, 'key',
                            default=None, group='key')
                        if key and isinstance(key, compat_str):
                            self._keys.insert(0, key)
                            self._current_key = key

        message = self._html_search_regex(
            r'(?s)<div[^>]+class="global-message cloudcast-disabled-notice-light"[^>]*>(.+?)<(?:a|/div)',
            webpage, 'error message', default=None)

        encrypted_play_info = self._search_regex(
            r'm-play-info="([^"]+)"', webpage, 'play info')
        play_info = self._parse_json(
            self._decrypt_play_info(encrypted_play_info), track_id)

        play_info = self._decrypt_play_info(encrypted_play_info, track_id)

        if message and 'stream_url' not in play_info:
            raise ExtractorError('%s said: %s' % (self.IE_NAME, message), expected=True)
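The decryption above is a plain XOR against a repeating ASCII key, tried against several candidate keys until one of them yields parseable JSON. A standalone round-trip sketch of that scheme (the JSON payload below is made up; only the second entry of `_keys` is shown):

```python
# Standalone sketch of the XOR-with-repeating-key scheme used by
# _decrypt_play_info above. The payload is invented sample data.
import base64

def xor_crypt(text, key):
    # XOR is its own inverse, so the same routine encrypts and decrypts.
    return ''.join(
        chr(ord(ch) ^ ord(key[idx % len(key)]))
        for idx, ch in enumerate(text))

key = 'pleasedontdownloadourmusictheartistswontgetpaid'
plain = '{"stream_url": "https://example.com/a.m4a"}'
encrypted = base64.b64encode(xor_crypt(plain, key).encode('ascii'))

decoded = base64.b64decode(encrypted).decode('ascii')
assert xor_crypt(decoded, key) == plain
```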
@@ -15,7 +15,7 @@ class MLBIE(InfoExtractor):
                    (?:[\da-z_-]+\.)*mlb\.com/
                    (?:
                        (?:
                            (?:.*?/)?video/(?:topic/[\da-z_-]+/)?v|
                            (?:.*?/)?video/(?:topic/[\da-z_-]+/)?(?:v|.*?/c-)|
                            (?:
                                shared/video/embed/(?:embed|m-internal-embed)\.html|
                                (?:[^/]+/)+(?:play|index)\.jsp|

@@ -84,7 +84,7 @@ class MLBIE(InfoExtractor):
        },
        {
            'url': 'http://m.mlb.com/news/article/118550098/blue-jays-kevin-pillar-goes-spidey-up-the-wall-to-rob-tim-beckham-of-a-homer',
            'md5': 'b190e70141fb9a1552a85426b4da1b5d',
            'md5': 'aafaf5b0186fee8f32f20508092f8111',
            'info_dict': {
                'id': '75609783',
                'ext': 'mp4',

@@ -94,6 +94,10 @@ class MLBIE(InfoExtractor):
                'upload_date': '20150415',
            }
        },
        {
            'url': 'https://www.mlb.com/video/hargrove-homers-off-caldwell/c-1352023483?tid=67793694',
            'only_matching': True,
        },
        {
            'url': 'http://m.mlb.com/shared/video/embed/embed.html?content_id=35692085&topic_id=6479266&width=400&height=224&property=mlb',
            'only_matching': True,
@@ -1,62 +0,0 @@
from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import int_or_none


class MporaIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?mpora\.(?:com|de)/videos/(?P<id>[^?#/]+)'
    IE_NAME = 'MPORA'

    _TEST = {
        'url': 'http://mpora.de/videos/AAdo8okx4wiz/embed?locale=de',
        'md5': 'a7a228473eedd3be741397cf452932eb',
        'info_dict': {
            'id': 'AAdo8okx4wiz',
            'ext': 'mp4',
            'title': 'Katy Curd - Winter in the Forest',
            'duration': 416,
            'uploader': 'Peter Newman Media',
        },
    }

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        data_json = self._search_regex(
            [r"new FM\.Player\('[^']+',\s*(\{.*?)\).player;",
             r"new\s+FM\.Kaltura\.Player\('[^']+'\s*,\s*({.+?})\);"],
            webpage, 'json')
        data = self._parse_json(data_json, video_id)

        uploader = data['info_overlay'].get('username')
        duration = data['video']['duration'] // 1000
        thumbnail = data['video']['encodings']['sd']['poster']
        title = data['info_overlay']['title']

        formats = []
        for encoding_id, edata in data['video']['encodings'].items():
            for src in edata['sources']:
                width_str = self._search_regex(
                    r'_([0-9]+)\.[a-zA-Z0-9]+$', src['src'],
                    False, default=None)
                vcodec = src['type'].partition('/')[2]

                formats.append({
                    'format_id': encoding_id + '-' + vcodec,
                    'url': src['src'],
                    'vcodec': vcodec,
                    'width': int_or_none(width_str),
                })

        self._sort_formats(formats)

        return {
            'id': video_id,
            'title': title,
            'formats': formats,
            'uploader': uploader,
            'duration': duration,
            'thumbnail': thumbnail,
        }
@@ -50,8 +50,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
        thumb_node = itemdoc.find(search_path)
        if thumb_node is None:
            return None
        else:
            return thumb_node.attrib['url']
        return thumb_node.get('url') or thumb_node.text or None

    def _extract_mobile_video_formats(self, mtvn_id):
        webpage_url = self._MOBILE_TEMPLATE % mtvn_id

@@ -83,7 +82,7 @@ class MTVServicesInfoExtractor(InfoExtractor):
                hls_url = rendition.find('./src').text
                formats.extend(self._extract_m3u8_formats(
                    hls_url, video_id, ext='mp4', entry_protocol='m3u8_native',
                    m3u8_id='hls'))
                    m3u8_id='hls', fatal=False))
            else:
                # fms
                try:

@@ -106,7 +105,8 @@ class MTVServicesInfoExtractor(InfoExtractor):
                    }])
                except (KeyError, TypeError):
                    raise ExtractorError('Invalid rendition field.')
        self._sort_formats(formats)
        if formats:
            self._sort_formats(formats)
        return formats

    def _extract_subtitles(self, mdoc, mtvn_id):

@@ -133,8 +133,11 @@ class MTVServicesInfoExtractor(InfoExtractor):
            mediagen_url += 'acceptMethods='
            mediagen_url += 'hls' if use_hls else 'fms'

        mediagen_doc = self._download_xml(mediagen_url, video_id,
                                          'Downloading video urls')
        mediagen_doc = self._download_xml(
            mediagen_url, video_id, 'Downloading video urls', fatal=False)

        if mediagen_doc is False:
            return None

        item = mediagen_doc.find('./video/item')
        if item is not None and item.get('type') == 'text':

@@ -174,6 +177,13 @@ class MTVServicesInfoExtractor(InfoExtractor):

        formats = self._extract_video_formats(mediagen_doc, mtvn_id, video_id)

        # Some parts of complete video may be missing (e.g. missing Act 3 in
        # http://www.southpark.de/alle-episoden/s14e01-sexual-healing)
        if not formats:
            return None

        self._sort_formats(formats)

        return {
            'title': title,
            'formats': formats,

@@ -205,9 +215,14 @@ class MTVServicesInfoExtractor(InfoExtractor):
        title = xpath_text(idoc, './channel/title')
        description = xpath_text(idoc, './channel/description')

        entries = []
        for item in idoc.findall('.//item'):
            info = self._get_video_info(item, use_hls)
            if info:
                entries.append(info)

        return self.playlist_result(
            [self._get_video_info(item, use_hls) for item in idoc.findall('.//item')],
            playlist_title=title, playlist_description=description)
            entries, playlist_title=title, playlist_description=description)

    def _extract_triforce_mgid(self, webpage, data_zone=None, video_id=None):
        triforce_feed = self._parse_json(self._search_regex(
271
youtube_dl/extractor/nexx.py
Normal file
@@ -0,0 +1,271 @@
# coding: utf-8
from __future__ import unicode_literals

import hashlib
import random
import re
import time

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    ExtractorError,
    int_or_none,
    parse_duration,
    try_get,
    urlencode_postdata,
)


class NexxIE(InfoExtractor):
    _VALID_URL = r'https?://api\.nexx(?:\.cloud|cdn\.com)/v3/(?P<domain_id>\d+)/videos/byid/(?P<id>\d+)'
    _TESTS = [{
        # movie
        'url': 'https://api.nexx.cloud/v3/748/videos/byid/128907',
        'md5': '16746bfc28c42049492385c989b26c4a',
        'info_dict': {
            'id': '128907',
            'ext': 'mp4',
            'title': 'Stiftung Warentest',
            'alt_title': 'Wie ein Test abläuft',
            'description': 'md5:d1ddb1ef63de721132abd38639cc2fd2',
            'release_year': 2013,
            'creator': 'SPIEGEL TV',
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 2509,
            'timestamp': 1384264416,
            'upload_date': '20131112',
        },
        'params': {
            'format': 'bestvideo',
        },
    }, {
        # episode
        'url': 'https://api.nexx.cloud/v3/741/videos/byid/247858',
        'info_dict': {
            'id': '247858',
            'ext': 'mp4',
            'title': 'Return of the Golden Child (OV)',
            'description': 'md5:5d969537509a92b733de21bae249dc63',
            'release_year': 2017,
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 1397,
            'timestamp': 1495033267,
            'upload_date': '20170517',
            'episode_number': 2,
            'season_number': 2,
        },
        'params': {
            'format': 'bestvideo',
            'skip_download': True,
        },
    }, {
        'url': 'https://api.nexxcdn.com/v3/748/videos/byid/128907',
        'only_matching': True,
    }]

    @staticmethod
    def _extract_urls(webpage):
        # Reference:
        # 1. https://nx-s.akamaized.net/files/201510/44.pdf

        entries = []

        # JavaScript Integration
        mobj = re.search(
            r'<script\b[^>]+\bsrc=["\']https?://require\.nexx(?:\.cloud|cdn\.com)/(?P<id>\d+)',
            webpage)
        if mobj:
            domain_id = mobj.group('id')
            for video_id in re.findall(
                    r'(?is)onPLAYReady.+?_play\.init\s*\(.+?\s*,\s*["\']?(\d+)',
                    webpage):
                entries.append(
                    'https://api.nexx.cloud/v3/%s/videos/byid/%s'
                    % (domain_id, video_id))

        # TODO: support more embed formats

        return entries

    @staticmethod
    def _extract_url(webpage):
        return NexxIE._extract_urls(webpage)[0]

    def _handle_error(self, response):
        status = int_or_none(try_get(
            response, lambda x: x['metadata']['status']) or 200)
        if 200 <= status < 300:
            return
        raise ExtractorError(
            '%s said: %s' % (self.IE_NAME, response['metadata']['errorhint']),
            expected=True)

    def _call_api(self, domain_id, path, video_id, data=None, headers={}):
        headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
        result = self._download_json(
            'https://api.nexx.cloud/v3/%s/%s' % (domain_id, path), video_id,
            'Downloading %s JSON' % path, data=urlencode_postdata(data),
            headers=headers)
        self._handle_error(result)
        return result['result']

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        domain_id, video_id = mobj.group('domain_id', 'id')

        # Reverse engineered from JS code (see getDeviceID function)
        device_id = '%d:%d:%d%d' % (
            random.randint(1, 4), int(time.time()),
            random.randint(1e4, 99999), random.randint(1, 9))

        result = self._call_api(domain_id, 'session/init', video_id, data={
            'nxp_devh': device_id,
            'nxp_userh': '',
            'precid': '0',
            'playlicense': '0',
            'screenx': '1920',
            'screeny': '1080',
            'playerversion': '6.0.00',
            'gateway': 'html5',
            'adGateway': '',
            'explicitlanguage': 'en-US',
            'addTextTemplates': '1',
            'addDomainData': '1',
            'addAdModel': '1',
        }, headers={
            'X-Request-Enable-Auth-Fallback': '1',
        })

        cid = result['general']['cid']

        # As described in [1] X-Request-Token generation algorithm is
        # as follows:
        #   md5( operation + domain_id + domain_secret )
        # where domain_secret is a static value that will be given by nexx.tv
        # as per [1]. Here is how this "secret" is generated (reversed
        # from _play.api.init function, search for clienttoken). So it's
        # actually not static and not that much of a secret.
        # 1. https://nexxtvstorage.blob.core.windows.net/files/201610/27.pdf
        secret = result['device']['clienttoken'][int(device_id[0]):]
        secret = secret[0:len(secret) - int(device_id[-1])]

        op = 'byid'

        # Reversed from JS code for _play.api.call function (search for
        # X-Request-Token)
        request_token = hashlib.md5(
            ''.join((op, domain_id, secret)).encode('utf-8')).hexdigest()

        video = self._call_api(
            domain_id, 'videos/%s/%s' % (op, video_id), video_id, data={
                'additionalfields': 'language,channel,actors,studio,licenseby,slug,subtitle,teaser,description',
                'addInteractionOptions': '1',
                'addStatusDetails': '1',
                'addStreamDetails': '1',
                'addCaptions': '1',
                'addScenes': '1',
                'addHotSpots': '1',
                'addBumpers': '1',
                'captionFormat': 'data',
            }, headers={
                'X-Request-CID': cid,
                'X-Request-Token': request_token,
            })

        general = video['general']
        title = general['title']

        stream_data = video['streamdata']
        language = general.get('language_raw') or ''

        # TODO: reverse more cdns and formats

        cdn = stream_data['cdnType']
        assert cdn == 'azure'

        azure_locator = stream_data['azureLocator']

        AZURE_URL = 'http://nx-p%02d.akamaized.net/'

        for secure in ('s', ''):
            cdn_shield = stream_data.get('cdnShieldHTTP%s' % secure.upper())
            if cdn_shield:
                azure_base = 'http%s://%s' % (secure, cdn_shield)
                break
        else:
            azure_base = AZURE_URL % int(stream_data['azureAccount'].replace('nexxplayplus', ''))

        is_ml = ',' in language
        azure_m3u8_url = '%s%s/%s_src%s.ism/Manifest(format=m3u8-aapl)' % (
            azure_base, azure_locator, video_id, ('_manifest' if is_ml else ''))

        protection_token = try_get(
            video, lambda x: x['protectiondata']['token'], compat_str)
        if protection_token:
            azure_m3u8_url += '?hdnts=%s' % protection_token

        formats = self._extract_m3u8_formats(
            azure_m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native',
            m3u8_id='%s-hls' % cdn)
        self._sort_formats(formats)

        return {
            'id': video_id,
            'title': title,
            'alt_title': general.get('subtitle'),
            'description': general.get('description'),
            'release_year': int_or_none(general.get('year')),
            'creator': general.get('studio') or general.get('studio_adref'),
            'thumbnail': try_get(
                video, lambda x: x['imagedata']['thumb'], compat_str),
            'duration': parse_duration(general.get('runtime')),
            'timestamp': int_or_none(general.get('uploaded')),
            'episode_number': int_or_none(try_get(
                video, lambda x: x['episodedata']['episode'])),
            'season_number': int_or_none(try_get(
                video, lambda x: x['episodedata']['season'])),
            'formats': formats,
        }


class NexxEmbedIE(InfoExtractor):
    _VALID_URL = r'https?://embed\.nexx(?:\.cloud|cdn\.com)/\d+/(?P<id>[^/?#&]+)'
    _TEST = {
        'url': 'http://embed.nexx.cloud/748/KC1614647Z27Y7T?autoplay=1',
        'md5': '16746bfc28c42049492385c989b26c4a',
        'info_dict': {
            'id': '161464',
            'ext': 'mp4',
            'title': 'Nervenkitzel Achterbahn',
            'alt_title': 'Karussellbauer in Deutschland',
            'description': 'md5:ffe7b1cc59a01f585e0569949aef73cc',
            'release_year': 2005,
            'creator': 'SPIEGEL TV',
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 2761,
            'timestamp': 1394021479,
            'upload_date': '20140305',
        },
        'params': {
            'format': 'bestvideo',
            'skip_download': True,
        },
    }

    @staticmethod
    def _extract_urls(webpage):
        # Reference:
        # 1. https://nx-s.akamaized.net/files/201510/44.pdf

        # iFrame Embed Integration
        return [mobj.group('url') for mobj in re.finditer(
            r'<iframe[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//embed\.nexx(?:\.cloud|cdn\.com)/\d+/(?:(?!\1).)+)\1',
            webpage)]

    def _real_extract(self, url):
        embed_id = self._match_id(url)

        webpage = self._download_webpage(url, embed_id)

        return self.url_result(NexxIE._extract_url(webpage), ie=NexxIE.ie_key())
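The token handling in `NexxIE._real_extract` above can be condensed as follows. This is a sketch with made-up placeholder values; the real `clienttoken` comes from the `session/init` response:

```python
# Sketch of the X-Request-Token derivation reversed in the code above.
# device_id and clienttoken are made-up placeholders.
import hashlib

device_id = '2:1502000000:123455'           # '%d:%d:%d%d' per getDeviceID
clienttoken = '0123456789abcdefghijklmnop'  # from the session/init call

# The "secret" is the client token minus its first int(device_id[0])
# characters and its last int(device_id[-1]) characters.
secret = clienttoken[int(device_id[0]):]
secret = secret[0:len(secret) - int(device_id[-1])]

op, domain_id = 'byid', '748'
request_token = hashlib.md5(
    ''.join((op, domain_id, secret)).encode('utf-8')).hexdigest()
print(request_token)  # value sent as the X-Request-Token header
```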
@@ -12,6 +12,7 @@ class NickIE(MTVServicesInfoExtractor):
    IE_NAME = 'nick.com'
    _VALID_URL = r'https?://(?:(?:www|beta)\.)?nick(?:jr)?\.com/(?:[^/]+/)?(?:videos/clip|[^/]+/videos)/(?P<id>[^/?#.]+)'
    _FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'
    _GEO_COUNTRIES = ['US']
    _TESTS = [{
        'url': 'http://www.nick.com/videos/clip/alvinnn-and-the-chipmunks-112-full-episode.html',
        'playlist': [

@@ -74,7 +75,7 @@ class NickIE(MTVServicesInfoExtractor):

class NickDeIE(MTVServicesInfoExtractor):
    IE_NAME = 'nick.de'
    _VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.de|nickelodeon\.(?:nl|at))/(?:playlist|shows)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
    _VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl)|nickelodeon\.(?:nl|at))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
    _TESTS = [{
        'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',
        'only_matching': True,

@@ -87,6 +88,9 @@ class NickDeIE(MTVServicesInfoExtractor):
    }, {
        'url': 'http://www.nickelodeon.at/playlist/3773-top-videos/videos/episode/77993-das-letzte-gefecht',
        'only_matching': True,
    }, {
        'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',
        'only_matching': True,
    }]

    def _extract_mrss_url(self, webpage, host):

@@ -124,3 +128,21 @@ class NickNightIE(NickDeIE):
        return self._search_regex(
            r'mrss\s*:\s*(["\'])(?P<url>http.+?)\1', webpage,
            'mrss url', group='url')


class NickRuIE(MTVServicesInfoExtractor):
    IE_NAME = 'nickelodeonru'
    _VALID_URL = r'https?://(?:www\.)nickelodeon\.ru/(?:playlist|shows|videos)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
    _TESTS = [{
        'url': 'http://www.nickelodeon.ru/shows/henrydanger/videos/episodes/3-sezon-15-seriya-licenziya-na-polyot/pmomfb#playlist/7airc6',
        'only_matching': True,
    }, {
        'url': 'http://www.nickelodeon.ru/videos/smotri-na-nickelodeon-v-iyule/g9hvh7',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        mgid = self._extract_mgid(webpage)
        return self.url_result('http://media.mtvnservices.com/embed/%s' % mgid)
@@ -1,23 +1,27 @@
# coding: utf-8
from __future__ import unicode_literals

import re
import json
import datetime

from .common import InfoExtractor
from ..compat import (
    compat_parse_qs,
    compat_urlparse,
)
from ..utils import (
    determine_ext,
    dict_get,
    ExtractorError,
    int_or_none,
    float_or_none,
    parse_duration,
    parse_iso8601,
    sanitized_Request,
    xpath_text,
    determine_ext,
    remove_start,
    try_get,
    unified_timestamp,
    urlencode_postdata,
    xpath_text,
)

@@ -32,12 +36,15 @@ class NiconicoIE(InfoExtractor):
            'id': 'sm22312215',
            'ext': 'mp4',
            'title': 'Big Buck Bunny',
            'thumbnail': r're:https?://.*',
            'uploader': 'takuya0301',
            'uploader_id': '2698420',
            'upload_date': '20131123',
            'timestamp': 1385182762,
            'description': '(c) copyright 2008, Blender Foundation / www.bigbuckbunny.org',
            'duration': 33,
            'view_count': int,
            'comment_count': int,
        },
        'skip': 'Requires an account',
    }, {

@@ -49,6 +56,7 @@ class NiconicoIE(InfoExtractor):
            'ext': 'swf',
            'title': '【鏡音リン】Dance on media【オリジナル】take2!',
            'description': 'md5:689f066d74610b3b22e0f1739add0f58',
            'thumbnail': r're:https?://.*',
            'uploader': 'りょうた',
            'uploader_id': '18822557',
            'upload_date': '20110429',

@@ -65,9 +73,11 @@ class NiconicoIE(InfoExtractor):
            'ext': 'unknown_video',
            'description': 'deleted',
            'title': 'ドラえもんエターナル第3話「決戦第3新東京市」<前編>',
            'thumbnail': r're:https?://.*',
            'upload_date': '20071224',
            'timestamp': int,  # timestamp field has different value if logged in
            'duration': 304,
            'view_count': int,
        },
        'skip': 'Requires an account',
    }, {

@@ -77,12 +87,51 @@ class NiconicoIE(InfoExtractor):
            'ext': 'mp4',
            'title': '【第1回】RADIOアニメロミックス ラブライブ!~のぞえりRadio Garden~',
            'description': 'md5:b27d224bb0ff53d3c8269e9f8b561cf1',
            'thumbnail': r're:https?://.*',
            'timestamp': 1388851200,
            'upload_date': '20140104',
            'uploader': 'アニメロチャンネル',
            'uploader_id': '312',
        },
        'skip': 'The viewing period of the video you were searching for has expired.',
    }, {
        # video not available via `getflv`; "old" HTML5 video
        'url': 'http://www.nicovideo.jp/watch/sm1151009',
        'md5': '8fa81c364eb619d4085354eab075598a',
        'info_dict': {
            'id': 'sm1151009',
            'ext': 'mp4',
            'title': 'マスターシステム本体内蔵のスペハリのメインテーマ(PSG版)',
            'description': 'md5:6ee077e0581ff5019773e2e714cdd0b7',
            'thumbnail': r're:https?://.*',
            'duration': 184,
            'timestamp': 1190868283,
            'upload_date': '20070927',
            'uploader': 'denden2',
            'uploader_id': '1392194',
            'view_count': int,
            'comment_count': int,
        },
        'skip': 'Requires an account',
    }, {
        # "New" HTML5 video
        'url': 'http://www.nicovideo.jp/watch/sm31464864',
        'md5': '351647b4917660986dc0fa8864085135',
        'info_dict': {
            'id': 'sm31464864',
            'ext': 'mp4',
            'title': '新作TVアニメ「戦姫絶唱シンフォギアAXZ」PV 最高画質',
            'description': 'md5:e52974af9a96e739196b2c1ca72b5feb',
            'timestamp': 1498514060,
            'upload_date': '20170626',
            'uploader': 'ゲス',
            'uploader_id': '40826363',
            'thumbnail': r're:https?://.*',
            'duration': 198,
            'view_count': int,
            'comment_count': int,
        },
        'skip': 'Requires an account',
    }, {
        'url': 'http://sp.nicovideo.jp/watch/sm28964488?ss_pos=1&cp_in=wt_tg',
        'only_matching': True,

@@ -101,19 +150,102 @@ class NiconicoIE(InfoExtractor):
            return True

        # Log in
        login_ok = True
        login_form_strs = {
            'mail': username,
            'mail_tel': username,
            'password': password,
        }
        login_data = urlencode_postdata(login_form_strs)
        request = sanitized_Request(
            'https://secure.nicovideo.jp/secure/login', login_data)
        login_results = self._download_webpage(
            request, None, note='Logging in', errnote='Unable to log in')
        if re.search(r'(?i)<h1 class="mb8p4">Log in error</h1>', login_results) is not None:
        urlh = self._request_webpage(
            'https://account.nicovideo.jp/api/v1/login', None,
            note='Logging in', errnote='Unable to log in',
            data=urlencode_postdata(login_form_strs))
        if urlh is False:
            login_ok = False
        else:
            parts = compat_urlparse.urlparse(urlh.geturl())
            if compat_parse_qs(parts.query).get('message', [None])[0] == 'cant_login':
                login_ok = False
        if not login_ok:
            self._downloader.report_warning('unable to log in: bad username or password')
            return False
        return True
        return login_ok

    def _extract_format_for_quality(self, api_data, video_id, audio_quality, video_quality):
        def yesno(boolean):
            return 'yes' if boolean else 'no'

        session_api_data = api_data['video']['dmcInfo']['session_api']
        session_api_endpoint = session_api_data['urls'][0]

        format_id = '-'.join(map(lambda s: remove_start(s['id'], 'archive_'), [video_quality, audio_quality]))

        session_response = self._download_json(
            session_api_endpoint['url'], video_id,
            query={'_format': 'json'},
            headers={'Content-Type': 'application/json'},
            note='Downloading JSON metadata for %s' % format_id,
            data=json.dumps({
                'session': {
                    'client_info': {
                        'player_id': session_api_data['player_id'],
                    },
                    'content_auth': {
                        'auth_type': session_api_data['auth_types'][session_api_data['protocols'][0]],
                        'content_key_timeout': session_api_data['content_key_timeout'],
                        'service_id': 'nicovideo',
                        'service_user_id': session_api_data['service_user_id']
                    },
                    'content_id': session_api_data['content_id'],
                    'content_src_id_sets': [{
                        'content_src_ids': [{
                            'src_id_to_mux': {
                                'audio_src_ids': [audio_quality['id']],
                                'video_src_ids': [video_quality['id']],
                            }
                        }]
                    }],
                    'content_type': 'movie',
                    'content_uri': '',
                    'keep_method': {
                        'heartbeat': {
                            'lifetime': session_api_data['heartbeat_lifetime']
                        }
                    },
                    'priority': session_api_data['priority'],
                    'protocol': {
                        'name': 'http',
                        'parameters': {
                            'http_parameters': {
                                'parameters': {
                                    'http_output_download_parameters': {
                                        'use_ssl': yesno(session_api_endpoint['is_ssl']),
                                        'use_well_known_port': yesno(session_api_endpoint['is_well_known_port']),
                                    }
                                }
                            }
                        }
                    },
                    'recipe_id': session_api_data['recipe_id'],
                    'session_operation_auth': {
                        'session_operation_auth_by_signature': {
                            'signature': session_api_data['signature'],
                            'token': session_api_data['token'],
                        }
                    },
                    'timing_constraint': 'unlimited'
                }
            }))

        resolution = video_quality.get('resolution', {})

        return {
            'url': session_response['data']['session']['content_uri'],
            'format_id': format_id,
            'ext': 'mp4',  # Session API are used in HTML5, which always serves mp4
            'abr': float_or_none(audio_quality.get('bitrate'), 1000),
            'vbr': float_or_none(video_quality.get('bitrate'), 1000),
            'height': resolution.get('height'),
            'width': resolution.get('width'),
        }

    def _real_extract(self, url):
        video_id = self._match_id(url)

@@ -126,30 +258,84 @@ class NiconicoIE(InfoExtractor):
        if video_id.startswith('so'):
            video_id = self._match_id(handle.geturl())

        video_info = self._download_xml(
            'http://ext.nicovideo.jp/api/getthumbinfo/' + video_id, video_id,
            note='Downloading video info page')
        api_data = self._parse_json(self._html_search_regex(
            'data-api-data="([^"]+)"', webpage,
            'API data', default='{}'), video_id)

        # Get flv info
        flv_info_webpage = self._download_webpage(
            'http://flapi.nicovideo.jp/api/getflv/' + video_id + '?as3=1',
            video_id, 'Downloading flv info')
        def _format_id_from_url(video_url):
            return 'economy' if video_real_url.endswith('low') else 'normal'

        flv_info = compat_urlparse.parse_qs(flv_info_webpage)
        if 'url' not in flv_info:
            if 'deleted' in flv_info:
                raise ExtractorError('The video has been deleted.',
                                     expected=True)
            elif 'closed' in flv_info:
                raise ExtractorError('Niconico videos now require logging in',
                                     expected=True)
            else:
                raise ExtractorError('Unable to find video URL')
        try:
            video_real_url = api_data['video']['smileInfo']['url']
        except KeyError:  # Flash videos
            # Get flv info
            flv_info_webpage = self._download_webpage(
                'http://flapi.nicovideo.jp/api/getflv/' + video_id + '?as3=1',
                video_id, 'Downloading flv info')

        video_real_url = flv_info['url'][0]
            flv_info = compat_urlparse.parse_qs(flv_info_webpage)
            if 'url' not in flv_info:
                if 'deleted' in flv_info:
                    raise ExtractorError('The video has been deleted.',
                                         expected=True)
                elif 'closed' in flv_info:
                    raise ExtractorError('Niconico videos now require logging in',
                                         expected=True)
                elif 'error' in flv_info:
                    raise ExtractorError('%s reports error: %s' % (
                        self.IE_NAME, flv_info['error'][0]), expected=True)
                else:
                    raise ExtractorError('Unable to find video URL')

            video_info_xml = self._download_xml(
                'http://ext.nicovideo.jp/api/getthumbinfo/' + video_id,
                video_id, note='Downloading video info page')

            def get_video_info(items):
                if not isinstance(items, list):
                    items = [items]
                for item in items:
                    ret = xpath_text(video_info_xml, './/' + item)
                    if ret:
                        return ret

            video_real_url = flv_info['url'][0]

            extension = get_video_info('movie_type')
            if not extension:
                extension = determine_ext(video_real_url)

            formats = [{
                'url': video_real_url,
                'ext': extension,
                'format_id': _format_id_from_url(video_real_url),
            }]
        else:
            formats = []

            dmc_info = api_data['video'].get('dmcInfo')
            if dmc_info:  # "New" HTML5 videos
                quality_info = dmc_info['quality']
                for audio_quality in quality_info['audios']:
                    for video_quality in quality_info['videos']:
                        if not audio_quality['available'] or not video_quality['available']:
                            continue
                        formats.append(self._extract_format_for_quality(
                            api_data, video_id, audio_quality, video_quality))

                self._sort_formats(formats)
            else:  # "Old" HTML5 videos
                formats = [{
                    'url': video_real_url,
                    'ext': 'mp4',
                    'format_id': _format_id_from_url(video_real_url),
                }]

            def get_video_info(items):
                return dict_get(api_data['video'], items)

        # Start extracting information
        title = xpath_text(video_info, './/title')
        title = get_video_info('title')
        if not title:
            title = self._og_search_title(webpage, default=None)
        if not title:

@@ -163,18 +349,15 @@ class NiconicoIE(InfoExtractor):
        watch_api_data = self._parse_json(watch_api_data_string, video_id) if watch_api_data_string else {}
        video_detail = watch_api_data.get('videoDetail', {})

        extension = xpath_text(video_info, './/movie_type')
        if not extension:
            extension = determine_ext(video_real_url)

        thumbnail = (
            xpath_text(video_info, './/thumbnail_url') or
            get_video_info(['thumbnail_url', 'thumbnailURL']) or
            self._html_search_meta('image', webpage, 'thumbnail', default=None) or
            video_detail.get('thumbnail'))

        description = xpath_text(video_info, './/description')
        description = get_video_info('description')

        timestamp = parse_iso8601(xpath_text(video_info, './/first_retrieve'))
        timestamp = (parse_iso8601(get_video_info('first_retrieve')) or
                     unified_timestamp(get_video_info('postedDateTime')))
        if not timestamp:
            match = self._html_search_meta('datePublished', webpage, 'date published', default=None)
            if match:

@@ -184,7 +367,7 @@ class NiconicoIE(InfoExtractor):
                video_detail['postedAt'].replace('/', '-'),
                delimiter=' ', timezone=datetime.timedelta(hours=9))

        view_count = int_or_none(xpath_text(video_info, './/view_counter'))
        view_count = int_or_none(get_video_info(['view_counter', 'viewCount']))
        if not view_count:
            match = self._html_search_regex(
                r'>Views: <strong[^>]*>([^<]+)</strong>',

@@ -193,38 +376,33 @@ class NiconicoIE(InfoExtractor):
                view_count = int_or_none(match.replace(',', ''))
        view_count = view_count or video_detail.get('viewCount')

        comment_count = int_or_none(xpath_text(video_info, './/comment_num'))
        comment_count = (int_or_none(get_video_info('comment_num')) or
                         video_detail.get('commentCount') or
                         try_get(api_data, lambda x: x['thread']['commentCount']))
        if not comment_count:
            match = self._html_search_regex(
                r'>Comments: <strong[^>]*>([^<]+)</strong>',
                webpage, 'comment count', default=None)
            if match:
                comment_count = int_or_none(match.replace(',', ''))
        comment_count = comment_count or video_detail.get('commentCount')

        duration = (parse_duration(
            xpath_text(video_info, './/length') or
            get_video_info('length') or
            self._html_search_meta(
                'video:duration', webpage, 'video duration', default=None)) or
            video_detail.get('length'))
            video_detail.get('length') or
            get_video_info('duration'))

        webpage_url = xpath_text(video_info, './/watch_url') or url
        webpage_url = get_video_info('watch_url') or url

        if video_info.find('.//ch_id') is not None:
            uploader_id = video_info.find('.//ch_id').text
            uploader = video_info.find('.//ch_name').text
        elif video_info.find('.//user_id') is not None:
            uploader_id = video_info.find('.//user_id').text
            uploader = video_info.find('.//user_nickname').text
        else:
            uploader_id = uploader = None
        owner = api_data.get('owner', {})
        uploader_id = get_video_info(['ch_id', 'user_id']) or owner.get('id')
        uploader = get_video_info(['ch_name', 'user_nickname']) or owner.get('nickname')

        return {
            'id': video_id,
            'url': video_real_url,
            'title': title,
            'ext': extension,
            'format_id': 'economy' if video_real_url.endswith('low') else 'normal',
            'formats': formats,
            'thumbnail': thumbnail,
            'description': description,
            'uploader': uploader,
@ -28,7 +28,7 @@ class NPOBaseIE(InfoExtractor):

class NPOIE(NPOBaseIE):
IE_NAME = 'npo'
IE_DESC = 'npo.nl and ntr.nl'
IE_DESC = 'npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl'
_VALID_URL = r'''(?x)
(?:
npo:|
@ -38,7 +38,7 @@ class NPOIE(NPOBaseIE):
npo\.nl/(?!(?:live|radio)/)(?:[^/]+/){2}|
ntr\.nl/(?:[^/]+/){2,}|
omroepwnl\.nl/video/fragment/[^/]+__|
zapp\.nl/[^/]+/[^/]+/
(?:zapp|npo3)\.nl/(?:[^/]+/){2}
)
)
(?P<id>[^/?#]+)
@ -146,6 +146,9 @@ class NPOIE(NPOBaseIE):
}, {
'url': 'http://www.zapp.nl/beste-vrienden-quiz/extra-video-s/WO_NTR_1067990',
'only_matching': True,
}, {
'url': 'https://www.npo3.nl/3onderzoekt/16-09-2015/VPWON_1239870',
'only_matching': True,
}, {
# live stream
'url': 'npo:LI_NL1_4188102',

@ -237,7 +237,7 @@ class NRKTVIE(NRKBaseIE):
(?:/\d{2}-\d{2}-\d{4})?
(?:\#del=(?P<part_id>\d+))?
''' % _EPISODE_RE
_API_HOST = 'psapi-we.nrk.no'
_API_HOST = 'psapi-ne.nrk.no'

_TESTS = [{
'url': 'https://tv.nrk.no/serie/20-spoersmaal-tv/MUHH48000314/23-05-2014',

@ -189,7 +189,7 @@ class PBSIE(InfoExtractor):
# Direct video URL
(?:%s)/(?:viralplayer|video)/(?P<id>[0-9]+)/? |
# Article with embedded player (or direct video)
(?:www\.)?pbs\.org/(?:[^/]+/){2,5}(?P<presumptive_id>[^/]+?)(?:\.html)?/?(?:$|[?\#]) |
(?:www\.)?pbs\.org/(?:[^/]+/){1,5}(?P<presumptive_id>[^/]+?)(?:\.html)?/?(?:$|[?\#]) |
# Player
(?:video|player)\.pbs\.org/(?:widget/)?partnerplayer/(?P<player_id>[^/]+)/
)
@ -345,6 +345,21 @@ class PBSIE(InfoExtractor):
'formats': 'mincount:8',
},
},
{
# https://github.com/rg3/youtube-dl/issues/13801
'url': 'https://www.pbs.org/video/pbs-newshour-full-episode-july-31-2017-1501539057/',
'info_dict': {
'id': '3003333873',
'ext': 'mp4',
'title': 'PBS NewsHour - full episode July 31, 2017',
'description': 'md5:d41d8cd98f00b204e9800998ecf8427e',
'duration': 3265,
'thumbnail': r're:^https?://.*\.jpg$',
},
'params': {
'skip_download': True,
},
},
{
'url': 'http://player.pbs.org/widget/partnerplayer/2365297708/?start=0&end=0&chapterbar=false&endscreen=false&topbar=true',
'only_matching': True,
@ -433,6 +448,9 @@ class PBSIE(InfoExtractor):
if url:
break

if not url:
url = self._og_search_url(webpage)

mobj = re.match(self._VALID_URL, url)

player_id = mobj.group('player_id')

63
youtube_dl/extractor/pearvideo.py
Normal file
@ -0,0 +1,63 @@
# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..utils import (
qualities,
unified_timestamp,
)


class PearVideoIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?pearvideo\.com/video_(?P<id>\d+)'
_TEST = {
'url': 'http://www.pearvideo.com/video_1076290',
'info_dict': {
'id': '1076290',
'ext': 'mp4',
'title': '小浣熊在主人家玻璃上滚石头:没砸',
'description': 'md5:01d576b747de71be0ee85eb7cac25f9d',
'timestamp': 1494275280,
'upload_date': '20170508',
}
}

def _real_extract(self, url):
video_id = self._match_id(url)

webpage = self._download_webpage(url, video_id)

quality = qualities(
('ldflv', 'ld', 'sdflv', 'sd', 'hdflv', 'hd', 'src'))

formats = [{
'url': mobj.group('url'),
'format_id': mobj.group('id'),
'quality': quality(mobj.group('id')),
} for mobj in re.finditer(
r'(?P<id>[a-zA-Z]+)Url\s*=\s*(["\'])(?P<url>(?:https?:)?//.+?)\2',
webpage)]
self._sort_formats(formats)

title = self._search_regex(
(r'<h1[^>]+\bclass=(["\'])video-tt\1[^>]*>(?P<value>[^<]+)',
r'<[^>]+\bdata-title=(["\'])(?P<value>(?:(?!\1).)+)\1'),
webpage, 'title', group='value')
description = self._search_regex(
(r'<div[^>]+\bclass=(["\'])summary\1[^>]*>(?P<value>[^<]+)',
r'<[^>]+\bdata-summary=(["\'])(?P<value>(?:(?!\1).)+)\1'),
webpage, 'description', default=None,
group='value') or self._html_search_meta('Description', webpage)
timestamp = unified_timestamp(self._search_regex(
r'<div[^>]+\bclass=["\']date["\'][^>]*>([^<]+)',
webpage, 'timestamp', fatal=False))

return {
'id': video_id,
'title': title,
'description': description,
'timestamp': timestamp,
'formats': formats,
}
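
For reference, the `qualities` helper imported at the top of this new extractor turns the ordered tuple above into a ranking function: a format id's index in the tuple is its preference, so `src` outranks `hd`, which outranks `ld`. A standalone sketch of that behavior (mirroring what `youtube_dl.utils.qualities` does; the asserts are illustrative, not part of the commit):

def qualities(quality_ids):
    # Rank a format id by its position in the ordered tuple; ids that are
    # not listed rank lowest (-1), so known qualities always win.
    def q(qid):
        try:
            return quality_ids.index(qid)
        except ValueError:
            return -1
    return q

quality = qualities(('ldflv', 'ld', 'sdflv', 'sd', 'hdflv', 'hd', 'src'))
assert quality('src') > quality('hd') > quality('ld')
assert quality('unknown') == -1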

@ -49,7 +49,7 @@ class PeriscopeIE(PeriscopeBaseIE):
@staticmethod
def _extract_url(webpage):
mobj = re.search(
r'<iframe[^>]+src=([\'"])(?P<url>(?:https?:)?//(?:www\.)?periscope\.tv/(?:(?!\1).)+)\1', webpage)
r'<iframe[^>]+src=([\'"])(?P<url>(?:https?:)?//(?:www\.)?(?:periscope|pscp)\.tv/(?:(?!\1).)+)\1', webpage)
if mobj:
return mobj.group('url')

@ -80,18 +80,24 @@ class PeriscopeIE(PeriscopeBaseIE):
stream = self._call_api(
'getAccessPublic', {'broadcast_id': token}, token)

video_urls = set()
formats = []
for format_id in ('replay', 'rtmp', 'hls', 'https_hls'):
for format_id in ('replay', 'rtmp', 'hls', 'https_hls', 'lhls', 'lhlsweb'):
video_url = stream.get(format_id + '_url')
if not video_url:
if not video_url or video_url in video_urls:
continue
f = {
video_urls.add(video_url)
if format_id != 'rtmp':
formats.extend(self._extract_m3u8_formats(
video_url, token, 'mp4',
entry_protocol='m3u8_native'
if state in ('ended', 'timed_out') else 'm3u8',
m3u8_id=format_id, fatal=False))
continue
formats.append({
'url': video_url,
'ext': 'flv' if format_id == 'rtmp' else 'mp4',
}
if format_id != 'rtmp':
f['protocol'] = 'm3u8_native' if state in ('ended', 'timed_out') else 'm3u8'
formats.append(f)
})
self._sort_formats(formats)

return {

@ -18,6 +18,7 @@ from ..utils import (
parse_duration,
qualities,
srt_subtitles_timecode,
try_get,
update_url_query,
urlencode_postdata,
)
@ -26,6 +27,39 @@ from ..utils import (
class PluralsightBaseIE(InfoExtractor):
_API_BASE = 'https://app.pluralsight.com'

def _download_course(self, course_id, url, display_id):
try:
return self._download_course_rpc(course_id, url, display_id)
except ExtractorError:
# Old API fallback
return self._download_json(
'https://app.pluralsight.com/player/user/api/v1/player/payload',
display_id, data=urlencode_postdata({'courseId': course_id}),
headers={'Referer': url})

def _download_course_rpc(self, course_id, url, display_id):
response = self._download_json(
'%s/player/functions/rpc' % self._API_BASE, display_id,
'Downloading course JSON',
data=json.dumps({
'fn': 'bootstrapPlayer',
'payload': {
'courseId': course_id,
},
}).encode('utf-8'),
headers={
'Content-Type': 'application/json;charset=utf-8',
'Referer': url,
})

course = try_get(response, lambda x: x['payload']['course'], dict)
if course:
return course

raise ExtractorError(
'%s said: %s' % (self.IE_NAME, response['error']['message']),
expected=True)


class PluralsightIE(PluralsightBaseIE):
IE_NAME = 'pluralsight'
@ -162,10 +196,7 @@ class PluralsightIE(PluralsightBaseIE):

display_id = '%s-%s' % (name, clip_id)

course = self._download_json(
'https://app.pluralsight.com/player/user/api/v1/player/payload',
display_id, data=urlencode_postdata({'courseId': course_name}),
headers={'Referer': url})
course = self._download_course(course_name, url, display_id)

collection = course['modules']

@ -224,6 +255,7 @@ class PluralsightIE(PluralsightBaseIE):
req_format_split = req_format.split('-', 1)
if len(req_format_split) > 1:
req_ext, req_quality = req_format_split
req_quality = '-'.join(req_quality.split('-')[:2])
for allowed_quality in ALLOWED_QUALITIES:
if req_ext == allowed_quality.ext and req_quality in allowed_quality.qualities:
return (AllowedQuality(req_ext, (req_quality, )), )
@ -330,18 +362,7 @@ class PluralsightCourseIE(PluralsightBaseIE):

# TODO: PSM cookie

course = self._download_json(
'%s/player/functions/rpc' % self._API_BASE, course_id,
'Downloading course JSON',
data=json.dumps({
'fn': 'bootstrapPlayer',
'payload': {
'courseId': course_id,
}
}).encode('utf-8'),
headers={
'Content-Type': 'application/json;charset=utf-8'
})['payload']['course']
course = self._download_course(course_id, url, course_id)

title = course['title']
course_name = course['name']

@ -9,39 +9,46 @@ from ..utils import int_or_none

class PodomaticIE(InfoExtractor):
IE_NAME = 'podomatic'
_VALID_URL = r'^(?P<proto>https?)://(?P<channel>[^.]+)\.podomatic\.com/entry/(?P<id>[^?]+)'
_VALID_URL = r'''(?x)
(?P<proto>https?)://
(?:
(?P<channel>[^.]+)\.podomatic\.com/entry|
(?:www\.)?podomatic\.com/podcasts/(?P<channel_2>[^/]+)/episodes
)/
(?P<id>[^/?#&]+)
'''

_TESTS = [
{
'url': 'http://scienceteachingtips.podomatic.com/entry/2009-01-02T16_03_35-08_00',
'md5': '84bb855fcf3429e6bf72460e1eed782d',
'info_dict': {
'id': '2009-01-02T16_03_35-08_00',
'ext': 'mp3',
'uploader': 'Science Teaching Tips',
'uploader_id': 'scienceteachingtips',
'title': '64. When the Moon Hits Your Eye',
'duration': 446,
}
},
{
'url': 'http://ostbahnhof.podomatic.com/entry/2013-11-15T16_31_21-08_00',
'md5': 'd2cf443931b6148e27638650e2638297',
'info_dict': {
'id': '2013-11-15T16_31_21-08_00',
'ext': 'mp3',
'uploader': 'Ostbahnhof / Techno Mix',
'uploader_id': 'ostbahnhof',
'title': 'Einunddreizig',
'duration': 3799,
}
},
]
_TESTS = [{
'url': 'http://scienceteachingtips.podomatic.com/entry/2009-01-02T16_03_35-08_00',
'md5': '84bb855fcf3429e6bf72460e1eed782d',
'info_dict': {
'id': '2009-01-02T16_03_35-08_00',
'ext': 'mp3',
'uploader': 'Science Teaching Tips',
'uploader_id': 'scienceteachingtips',
'title': '64. When the Moon Hits Your Eye',
'duration': 446,
}
}, {
'url': 'http://ostbahnhof.podomatic.com/entry/2013-11-15T16_31_21-08_00',
'md5': 'd2cf443931b6148e27638650e2638297',
'info_dict': {
'id': '2013-11-15T16_31_21-08_00',
'ext': 'mp3',
'uploader': 'Ostbahnhof / Techno Mix',
'uploader_id': 'ostbahnhof',
'title': 'Einunddreizig',
'duration': 3799,
}
}, {
'url': 'https://www.podomatic.com/podcasts/scienceteachingtips/episodes/2009-01-02T16_03_35-08_00',
'only_matching': True,
}]

def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
channel = mobj.group('channel')
channel = mobj.group('channel') or mobj.group('channel_2')

json_url = (('%s://%s.podomatic.com/entry/embed_params/%s' +
'?permalink=true&rtmp=0') %

@ -54,7 +54,7 @@ class PornHdIE(InfoExtractor):
r'<title>(.+?) - .*?[Pp]ornHD.*?</title>'], webpage, 'title')

sources = self._parse_json(js_to_json(self._search_regex(
r"(?s)'sources'\s*:\s*(\{.+?\})\s*\}[;,)]",
r"(?s)sources'?\s*:\s*(\{.+?\})\s*\}[;,)]",
webpage, 'sources', default='{}')), video_id)

if not sources:

@ -186,7 +186,7 @@ class PornHubIE(InfoExtractor):
title, thumbnail, duration = [None] * 3

video_uploader = self._html_search_regex(
r'(?s)From: .+?<(?:a href="/users/|a href="/channels/|span class="username)[^>]+>(.+?)<',
r'(?s)From: .+?<(?:a\b[^>]+\bhref=["\']/(?:user|channel)s/|span\b[^>]+\bclass=["\']username)[^>]+>(.+?)<',
webpage, 'uploader', fatal=False)

view_count = self._extract_count(
@ -227,20 +227,6 @@ class PornHubIE(InfoExtractor):

class PornHubPlaylistBaseIE(InfoExtractor):
def _extract_entries(self, webpage):
return [
self.url_result(
'http://www.pornhub.com/%s' % video_url,
PornHubIE.ie_key(), video_title=title)
for video_url, title in orderedSet(re.findall(
r'href="/?(view_video\.php\?.*\bviewkey=[\da-z]+[^"]*)"[^>]*\s+title="([^"]+)"',
webpage))
]

def _real_extract(self, url):
playlist_id = self._match_id(url)

webpage = self._download_webpage(url, playlist_id)

# Only process container div with main playlist content skipping
# drop-down menu that uses similar pattern for videos (see
# https://github.com/rg3/youtube-dl/issues/11594).
@ -248,7 +234,21 @@ class PornHubPlaylistBaseIE(InfoExtractor):
r'(?s)(<div[^>]+class=["\']container.+)', webpage,
'container', default=webpage)

entries = self._extract_entries(container)
return [
self.url_result(
'http://www.pornhub.com/%s' % video_url,
PornHubIE.ie_key(), video_title=title)
for video_url, title in orderedSet(re.findall(
r'href="/?(view_video\.php\?.*\bviewkey=[\da-z]+[^"]*)"[^>]*\s+title="([^"]+)"',
container))
]

def _real_extract(self, url):
playlist_id = self._match_id(url)

webpage = self._download_webpage(url, playlist_id)

entries = self._extract_entries(webpage)

playlist = self._parse_json(
self._search_regex(

@ -2,38 +2,37 @@
from __future__ import unicode_literals

import random
import time
import re
import time

from .common import InfoExtractor
from ..utils import (
sanitized_Request,
strip_jsonp,
unescapeHTML,
clean_html,
ExtractorError,
strip_jsonp,
unescapeHTML,
)


class QQMusicIE(InfoExtractor):
IE_NAME = 'qqmusic'
IE_DESC = 'QQ音乐'
_VALID_URL = r'https?://y\.qq\.com/#type=song&mid=(?P<id>[0-9A-Za-z]+)'
_VALID_URL = r'https?://y\.qq\.com/n/yqq/song/(?P<id>[0-9A-Za-z]+)\.html'
_TESTS = [{
'url': 'http://y.qq.com/#type=song&mid=004295Et37taLD',
'md5': '9ce1c1c8445f561506d2e3cfb0255705',
'url': 'https://y.qq.com/n/yqq/song/004295Et37taLD.html',
'md5': '5f1e6cea39e182857da7ffc5ef5e6bb8',
'info_dict': {
'id': '004295Et37taLD',
'ext': 'mp3',
'title': '可惜没如果',
'release_date': '20141227',
'creator': '林俊杰',
'description': 'md5:d327722d0361576fde558f1ac68a7065',
'description': 'md5:d85afb3051952ecc50a1ee8a286d1eac',
'thumbnail': r're:^https?://.*\.jpg$',
}
}, {
'note': 'There is no mp3-320 version of this song.',
'url': 'http://y.qq.com/#type=song&mid=004MsGEo3DdNxV',
'url': 'https://y.qq.com/n/yqq/song/004MsGEo3DdNxV.html',
'md5': 'fa3926f0c585cda0af8fa4f796482e3e',
'info_dict': {
'id': '004MsGEo3DdNxV',
@ -46,14 +45,14 @@ class QQMusicIE(InfoExtractor):
}
}, {
'note': 'lyrics not in .lrc format',
'url': 'http://y.qq.com/#type=song&mid=001JyApY11tIp6',
'url': 'https://y.qq.com/n/yqq/song/001JyApY11tIp6.html',
'info_dict': {
'id': '001JyApY11tIp6',
'ext': 'mp3',
'title': 'Shadows Over Transylvania',
'release_date': '19970225',
'creator': 'Dark Funeral',
'description': 'md5:ed14d5bd7ecec19609108052c25b2c11',
'description': 'md5:c9b20210587cbcd6836a1c597bab4525',
'thumbnail': r're:^https?://.*\.jpg$',
},
'params': {
@ -105,7 +104,7 @@ class QQMusicIE(InfoExtractor):
[r'albummid:\'([0-9a-zA-Z]+)\'', r'"albummid":"([0-9a-zA-Z]+)"'],
detail_info_page, 'album mid', default=None)
if albummid:
thumbnail_url = "http://i.gtimg.cn/music/photo/mid_album_500/%s/%s/%s.jpg" \
thumbnail_url = 'http://i.gtimg.cn/music/photo/mid_album_500/%s/%s/%s.jpg' \
% (albummid[-2:-1], albummid[-1], albummid)

guid = self.m_r_get_ruin()
@ -156,15 +155,39 @@ class QQPlaylistBaseIE(InfoExtractor):
def qq_static_url(category, mid):
return 'http://y.qq.com/y/static/%s/%s/%s/%s.html' % (category, mid[-2], mid[-1], mid)

@classmethod
def get_entries_from_page(cls, page):
def get_singer_all_songs(self, singmid, num):
return self._download_webpage(
r'https://c.y.qq.com/v8/fcg-bin/fcg_v8_singer_track_cp.fcg', singmid,
query={
'format': 'json',
'inCharset': 'utf8',
'outCharset': 'utf-8',
'platform': 'yqq',
'needNewCode': 0,
'singermid': singmid,
'order': 'listen',
'begin': 0,
'num': num,
'songstatus': 1,
})

def get_entries_from_page(self, singmid):
entries = []

for item in re.findall(r'class="data"[^<>]*>([^<>]+)</', page):
song_mid = unescapeHTML(item).split('|')[-5]
entries.append(cls.url_result(
'http://y.qq.com/#type=song&mid=' + song_mid, 'QQMusic',
song_mid))
default_num = 1
json_text = self.get_singer_all_songs(singmid, default_num)
json_obj_all_songs = self._parse_json(json_text, singmid)

if json_obj_all_songs['code'] == 0:
total = json_obj_all_songs['data']['total']
json_text = self.get_singer_all_songs(singmid, total)
json_obj_all_songs = self._parse_json(json_text, singmid)

for item in json_obj_all_songs['data']['list']:
if item['musicData'].get('songmid') is not None:
songmid = item['musicData']['songmid']
entries.append(self.url_result(
r'https://y.qq.com/n/yqq/song/%s.html' % songmid, 'QQMusic', songmid))

return entries

@ -172,42 +195,32 @@ class QQPlaylistBaseIE(InfoExtractor):
class QQMusicSingerIE(QQPlaylistBaseIE):
IE_NAME = 'qqmusic:singer'
IE_DESC = 'QQ音乐 - 歌手'
_VALID_URL = r'https?://y\.qq\.com/#type=singer&mid=(?P<id>[0-9A-Za-z]+)'
_VALID_URL = r'https?://y\.qq\.com/n/yqq/singer/(?P<id>[0-9A-Za-z]+)\.html'
_TEST = {
'url': 'http://y.qq.com/#type=singer&mid=001BLpXF2DyJe2',
'url': 'https://y.qq.com/n/yqq/singer/001BLpXF2DyJe2.html',
'info_dict': {
'id': '001BLpXF2DyJe2',
'title': '林俊杰',
'description': 'md5:870ec08f7d8547c29c93010899103751',
},
'playlist_count': 12,
'playlist_mincount': 12,
}

def _real_extract(self, url):
mid = self._match_id(url)

singer_page = self._download_webpage(
self.qq_static_url('singer', mid), mid, 'Download singer page')

entries = self.get_entries_from_page(singer_page)

entries = self.get_entries_from_page(mid)
singer_page = self._download_webpage(url, mid, 'Download singer page')
singer_name = self._html_search_regex(
r"singername\s*:\s*'([^']+)'", singer_page, 'singer name',
default=None)

singer_id = self._html_search_regex(
r"singerid\s*:\s*'([0-9]+)'", singer_page, 'singer id',
default=None)

r"singername\s*:\s*'(.*?)'", singer_page, 'singer name', default=None)
singer_desc = None

if singer_id:
req = sanitized_Request(
'http://s.plcloud.music.qq.com/fcgi-bin/fcg_get_singer_desc.fcg?utf8=1&outCharset=utf-8&format=xml&singerid=%s' % singer_id)
req.add_header(
'Referer', 'http://s.plcloud.music.qq.com/xhr_proxy_utf8.html')
if mid:
singer_desc_page = self._download_xml(
req, mid, 'Donwload singer description XML')
'http://s.plcloud.music.qq.com/fcgi-bin/fcg_get_singer_desc.fcg', mid,
'Donwload singer description XML',
query={'utf8': 1, 'outCharset': 'utf-8', 'format': 'xml', 'singermid': mid},
headers={'Referer': 'https://y.qq.com/n/yqq/singer/'})

singer_desc = singer_desc_page.find('./data/info/desc').text

@ -217,10 +230,10 @@ class QQMusicSingerIE(QQPlaylistBaseIE):
class QQMusicAlbumIE(QQPlaylistBaseIE):
IE_NAME = 'qqmusic:album'
IE_DESC = 'QQ音乐 - 专辑'
_VALID_URL = r'https?://y\.qq\.com/#type=album&mid=(?P<id>[0-9A-Za-z]+)'
_VALID_URL = r'https?://y\.qq\.com/n/yqq/album/(?P<id>[0-9A-Za-z]+)\.html'

_TESTS = [{
'url': 'http://y.qq.com/#type=album&mid=000gXCTb2AhRR1',
'url': 'https://y.qq.com/n/yqq/album/000gXCTb2AhRR1.html',
'info_dict': {
'id': '000gXCTb2AhRR1',
'title': '我们都是这样长大的',
@ -228,7 +241,7 @@ class QQMusicAlbumIE(QQPlaylistBaseIE):
},
'playlist_count': 4,
}, {
'url': 'http://y.qq.com/#type=album&mid=002Y5a3b3AlCu3',
'url': 'https://y.qq.com/n/yqq/album/002Y5a3b3AlCu3.html',
'info_dict': {
'id': '002Y5a3b3AlCu3',
'title': '그리고...',
@ -246,7 +259,7 @@ class QQMusicAlbumIE(QQPlaylistBaseIE):

entries = [
self.url_result(
'http://y.qq.com/#type=song&mid=' + song['songmid'], 'QQMusic', song['songmid']
'https://y.qq.com/n/yqq/song/' + song['songmid'] + '.html', 'QQMusic', song['songmid']
) for song in album['list']
]
album_name = album.get('name')
@ -260,31 +273,30 @@ class QQMusicAlbumIE(QQPlaylistBaseIE):
class QQMusicToplistIE(QQPlaylistBaseIE):
IE_NAME = 'qqmusic:toplist'
IE_DESC = 'QQ音乐 - 排行榜'
_VALID_URL = r'https?://y\.qq\.com/#type=toplist&p=(?P<id>(top|global)_[0-9]+)'
_VALID_URL = r'https?://y\.qq\.com/n/yqq/toplist/(?P<id>[0-9]+)\.html'

_TESTS = [{
'url': 'http://y.qq.com/#type=toplist&p=global_123',
'url': 'https://y.qq.com/n/yqq/toplist/123.html',
'info_dict': {
'id': 'global_123',
'id': '123',
'title': '美国iTunes榜',
},
'playlist_count': 10,
}, {
'url': 'http://y.qq.com/#type=toplist&p=top_3',
'info_dict': {
'id': 'top_3',
'title': '巅峰榜·欧美',
'description': 'QQ音乐巅峰榜·欧美根据用户收听行为自动生成,集结当下最流行的欧美新歌!:更新时间:每周四22点|统'
'计周期:一周(上周四至本周三)|统计对象:三个月内发行的欧美歌曲|统计数量:100首|统计算法:根据'
'歌曲在一周内的有效播放次数,由高到低取前100名(同一歌手最多允许5首歌曲同时上榜)|有效播放次数:'
'登录用户完整播放一首歌曲,记为一次有效播放;同一用户收听同一首歌曲,每天记录为1次有效播放'
'description': 'md5:89db2335fdbb10678dee2d43fe9aba08',
},
'playlist_count': 100,
}, {
'url': 'http://y.qq.com/#type=toplist&p=global_106',
'url': 'https://y.qq.com/n/yqq/toplist/3.html',
'info_dict': {
'id': 'global_106',
'id': '3',
'title': '巅峰榜·欧美',
'description': 'md5:5a600d42c01696b26b71f8c4d43407da',
},
'playlist_count': 100,
}, {
'url': 'https://y.qq.com/n/yqq/toplist/106.html',
'info_dict': {
'id': '106',
'title': '韩国Mnet榜',
'description': 'md5:cb84b325215e1d21708c615cac82a6e7',
},
'playlist_count': 50,
}]
@ -292,18 +304,15 @@ class QQMusicToplistIE(QQPlaylistBaseIE):
def _real_extract(self, url):
list_id = self._match_id(url)

list_type, num_id = list_id.split("_")

toplist_json = self._download_json(
'http://i.y.qq.com/v8/fcg-bin/fcg_v8_toplist_cp.fcg?type=%s&topid=%s&format=json'
% (list_type, num_id),
list_id, 'Download toplist page')
'http://i.y.qq.com/v8/fcg-bin/fcg_v8_toplist_cp.fcg', list_id,
note='Download toplist page',
query={'type': 'toplist', 'topid': list_id, 'format': 'json'})

entries = [
self.url_result(
'http://y.qq.com/#type=song&mid=' + song['data']['songmid'], 'QQMusic', song['data']['songmid']
) for song in toplist_json['songlist']
]
entries = [self.url_result(
'https://y.qq.com/n/yqq/song/' + song['data']['songmid'] + '.html', 'QQMusic',
song['data']['songmid'])
for song in toplist_json['songlist']]

topinfo = toplist_json.get('topinfo', {})
list_name = topinfo.get('ListName')
@ -314,10 +323,10 @@ class QQMusicToplistIE(QQPlaylistBaseIE):
class QQMusicPlaylistIE(QQPlaylistBaseIE):
IE_NAME = 'qqmusic:playlist'
IE_DESC = 'QQ音乐 - 歌单'
_VALID_URL = r'https?://y\.qq\.com/#type=taoge&id=(?P<id>[0-9]+)'
_VALID_URL = r'https?://y\.qq\.com/n/yqq/playlist/(?P<id>[0-9]+)\.html'

_TESTS = [{
'url': 'http://y.qq.com/#type=taoge&id=3462654915',
'url': 'http://y.qq.com/n/yqq/playlist/3462654915.html',
'info_dict': {
'id': '3462654915',
'title': '韩国5月新歌精选下旬',
@ -326,7 +335,7 @@ class QQMusicPlaylistIE(QQPlaylistBaseIE):
'playlist_count': 40,
'skip': 'playlist gone',
}, {
'url': 'http://y.qq.com/#type=taoge&id=1374105607',
'url': 'https://y.qq.com/n/yqq/playlist/1374105607.html',
'info_dict': {
'id': '1374105607',
'title': '易入人心的华语民谣',
@ -339,8 +348,9 @@ class QQMusicPlaylistIE(QQPlaylistBaseIE):
list_id = self._match_id(url)

list_json = self._download_json(
'http://i.y.qq.com/qzone-music/fcg-bin/fcg_ucc_getcdinfo_byids_cp.fcg?type=1&json=1&utf8=1&onlysong=0&disstid=%s'
% list_id, list_id, 'Download list page',
'http://i.y.qq.com/qzone-music/fcg-bin/fcg_ucc_getcdinfo_byids_cp.fcg',
list_id, 'Download list page',
query={'type': 1, 'json': 1, 'utf8': 1, 'onlysong': 0, 'disstid': list_id},
transform_source=strip_jsonp)
if not len(list_json.get('cdlist', [])):
if list_json.get('code'):
@ -350,11 +360,9 @@ class QQMusicPlaylistIE(QQPlaylistBaseIE):
raise ExtractorError('Unable to get playlist info')

cdlist = list_json['cdlist'][0]
entries = [
self.url_result(
'http://y.qq.com/#type=song&mid=' + song['songmid'], 'QQMusic', song['songmid']
) for song in cdlist['songlist']
]
entries = [self.url_result(
'https://y.qq.com/n/yqq/song/' + song['songmid'] + '.html', 'QQMusic', song['songmid'])
for song in cdlist['songlist']]

list_name = cdlist.get('dissname')
list_description = clean_html(unescapeHTML(cdlist.get('desc')))

114
youtube_dl/extractor/reddit.py
Normal file
@ -0,0 +1,114 @@
from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import (
ExtractorError,
int_or_none,
float_or_none,
)


class RedditIE(InfoExtractor):
_VALID_URL = r'https?://v\.redd\.it/(?P<id>[^/?#&]+)'
_TEST = {
# from https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/
'url': 'https://v.redd.it/zv89llsvexdz',
'md5': '655d06ace653ea3b87bccfb1b27ec99d',
'info_dict': {
'id': 'zv89llsvexdz',
'ext': 'mp4',
'title': 'zv89llsvexdz',
},
'params': {
'format': 'bestvideo',
},
}

def _real_extract(self, url):
video_id = self._match_id(url)

formats = self._extract_m3u8_formats(
'https://v.redd.it/%s/HLSPlaylist.m3u8' % video_id, video_id,
'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)

formats.extend(self._extract_mpd_formats(
'https://v.redd.it/%s/DASHPlaylist.mpd' % video_id, video_id,
mpd_id='dash', fatal=False))

return {
'id': video_id,
'title': video_id,
'formats': formats,
}


class RedditRIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?reddit\.com/r/[^/]+/comments/(?P<id>[^/]+)'
_TESTS = [{
'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',
'info_dict': {
'id': 'zv89llsvexdz',
'ext': 'mp4',
'title': 'That small heart attack.',
'thumbnail': r're:^https?://.*\.jpg$',
'timestamp': 1501941939,
'upload_date': '20170805',
'uploader': 'Antw87',
'like_count': int,
'dislike_count': int,
'comment_count': int,
'age_limit': 0,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}, {
'url': 'https://www.reddit.com/r/videos/comments/6rrwyj',
'only_matching': True,
}, {
# imgur
'url': 'https://www.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',
'only_matching': True,
}, {
# streamable
'url': 'https://www.reddit.com/r/videos/comments/6t7sg9/comedians_hilarious_joke_about_the_guam_flag/',
'only_matching': True,
}, {
# youtube
'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/',
'only_matching': True,
}]

def _real_extract(self, url):
video_id = self._match_id(url)

data = self._download_json(
url + '.json', video_id)[0]['data']['children'][0]['data']

video_url = data['url']

# Avoid recursing into the same reddit URL
if 'reddit.com/' in video_url and '/%s/' % video_id in video_url:
raise ExtractorError('No media found', expected=True)

over_18 = data.get('over_18')
if over_18 is True:
age_limit = 18
elif over_18 is False:
age_limit = 0
else:
age_limit = None

return {
'_type': 'url_transparent',
'url': video_url,
'title': data.get('title'),
'thumbnail': data.get('thumbnail'),
'timestamp': float_or_none(data.get('created_utc')),
'uploader': data.get('author'),
'like_count': int_or_none(data.get('ups')),
'dislike_count': int_or_none(data.get('downs')),
'comment_count': int_or_none(data.get('num_comments')),
'age_limit': age_limit,
}
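
The new RedditRIE above leans on reddit's JSON rendition of any comments page: appending `.json` to the post URL returns the listing, and the post data sits at `[0]['data']['children'][0]['data']`. A rough Python 3 sketch of the same lookup outside youtube-dl (the User-Agent value here is an assumption, added only because reddit tends to throttle the default urllib agent):

import json
from urllib.request import Request, urlopen

def reddit_post_data(post_url):
    # Appending '.json' to a reddit comments URL yields the listing as JSON;
    # the first element of the listing is the post itself.
    req = Request(
        post_url.rstrip('/') + '.json',
        headers={'User-Agent': 'demo-script/0.1'})  # assumed UA, not from the diff
    with urlopen(req) as resp:
        listing = json.load(resp)
    return listing[0]['data']['children'][0]['data']

data = reddit_post_data(
    'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/')
print(data.get('title'), data.get('url'))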

@ -31,7 +31,7 @@ class SlideshareIE(InfoExtractor):
page_title = mobj.group('title')
webpage = self._download_webpage(url, page_title)
slideshare_obj = self._search_regex(
r'\$\.extend\(slideshare_object,\s*(\{.*?\})\);',
r'\$\.extend\(.*?slideshare_object,\s*(\{.*?\})\);',
webpage, 'slideshare object')
info = json.loads(slideshare_obj)
if info['slideshow']['type'] != 'video':

@ -31,6 +31,7 @@ class SoundcloudIE(InfoExtractor):

_VALID_URL = r'''(?x)^(?:https?://)?
(?:(?:(?:www\.|m\.)?soundcloud\.com/
(?!stations/track)
(?P<uploader>[\w\d-]+)/
(?!(?:tracks|sets(?:/.+?)?|reposts|likes|spotlight)/?(?:$|[?#]))
(?P<title>[\w\d-]+)/?
@ -121,7 +122,7 @@ class SoundcloudIE(InfoExtractor):
},
]

_CLIENT_ID = '2t9loNQH90kzJcsFCODdigxfp325aq4z'
_CLIENT_ID = 'JlZIsxg2hY5WnBgtn3jfS0UYCl0K8DOg'
_IPHONE_CLIENT_ID = '376f225bf427445fc4bfb6b99b72e0bf'

@staticmethod
@ -330,7 +331,63 @@ class SoundcloudSetIE(SoundcloudPlaylistBaseIE):
}


class SoundcloudUserIE(SoundcloudPlaylistBaseIE):
class SoundcloudPagedPlaylistBaseIE(SoundcloudPlaylistBaseIE):
_API_BASE = 'https://api.soundcloud.com'
_API_V2_BASE = 'https://api-v2.soundcloud.com'

def _extract_playlist(self, base_url, playlist_id, playlist_title):
COMMON_QUERY = {
'limit': 50,
'client_id': self._CLIENT_ID,
'linked_partitioning': '1',
}

query = COMMON_QUERY.copy()
query['offset'] = 0

next_href = base_url + '?' + compat_urllib_parse_urlencode(query)

entries = []
for i in itertools.count():
response = self._download_json(
next_href, playlist_id, 'Downloading track page %s' % (i + 1))

collection = response['collection']
if not collection:
break

def resolve_permalink_url(candidates):
for cand in candidates:
if isinstance(cand, dict):
permalink_url = cand.get('permalink_url')
entry_id = self._extract_id(cand)
if permalink_url and permalink_url.startswith('http'):
return permalink_url, entry_id

for e in collection:
permalink_url, entry_id = resolve_permalink_url((e, e.get('track'), e.get('playlist')))
if permalink_url:
entries.append(self.url_result(permalink_url, video_id=entry_id))

next_href = response.get('next_href')
if not next_href:
break

parsed_next_href = compat_urlparse.urlparse(response['next_href'])
qs = compat_urlparse.parse_qs(parsed_next_href.query)
qs.update(COMMON_QUERY)
next_href = compat_urlparse.urlunparse(
parsed_next_href._replace(query=compat_urllib_parse_urlencode(qs, True)))

return {
'_type': 'playlist',
'id': playlist_id,
'title': playlist_title,
'entries': entries,
}
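
The `_extract_playlist` helper above is SoundCloud's linked-partitioning paging in one loop: fetch a page, drain `collection`, then follow `next_href` until the API stops returning one, re-merging the common query because `next_href` does not always carry the client id. A compact Python 3 sketch of that loop (the client id is a placeholder, and `walk_collection` is a name invented for illustration; the endpoint shape is as in the code above):

import json
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
from urllib.request import urlopen

COMMON_QUERY = {
    'limit': '50',
    'client_id': 'YOUR_CLIENT_ID',  # placeholder; SoundCloud requires a real one
    'linked_partitioning': '1',
}

def walk_collection(base_url):
    # First page: the common query plus an explicit offset.
    next_href = base_url + '?' + urlencode(dict(COMMON_QUERY, offset='0'))
    while next_href:
        with urlopen(next_href) as resp:
            response = json.load(resp)
        for entry in response.get('collection') or []:
            yield entry
        next_href = response.get('next_href')
        if next_href:
            # next_href may drop client_id and friends, so merge them back in,
            # as the extractor above does with compat_urlparse/parse_qs.
            parsed = urlparse(next_href)
            qs = parse_qs(parsed.query)
            qs.update(COMMON_QUERY)
            next_href = urlunparse(parsed._replace(query=urlencode(qs, doseq=True)))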


class SoundcloudUserIE(SoundcloudPagedPlaylistBaseIE):
_VALID_URL = r'''(?x)
https?://
(?:(?:www|m)\.)?soundcloud\.com/
@ -385,16 +442,13 @@ class SoundcloudUserIE(SoundcloudPlaylistBaseIE):
'playlist_mincount': 1,
}]

_API_BASE = 'https://api.soundcloud.com'
_API_V2_BASE = 'https://api-v2.soundcloud.com'

_BASE_URL_MAP = {
'all': '%s/profile/soundcloud:users:%%s' % _API_V2_BASE,
'tracks': '%s/users/%%s/tracks' % _API_BASE,
'sets': '%s/users/%%s/playlists' % _API_V2_BASE,
'reposts': '%s/profile/soundcloud:users:%%s/reposts' % _API_V2_BASE,
'likes': '%s/users/%%s/likes' % _API_V2_BASE,
'spotlight': '%s/users/%%s/spotlight' % _API_V2_BASE,
'all': '%s/profile/soundcloud:users:%%s' % SoundcloudPagedPlaylistBaseIE._API_V2_BASE,
'tracks': '%s/users/%%s/tracks' % SoundcloudPagedPlaylistBaseIE._API_BASE,
'sets': '%s/users/%%s/playlists' % SoundcloudPagedPlaylistBaseIE._API_V2_BASE,
'reposts': '%s/profile/soundcloud:users:%%s/reposts' % SoundcloudPagedPlaylistBaseIE._API_V2_BASE,
'likes': '%s/users/%%s/likes' % SoundcloudPagedPlaylistBaseIE._API_V2_BASE,
'spotlight': '%s/users/%%s/spotlight' % SoundcloudPagedPlaylistBaseIE._API_V2_BASE,
}

_TITLE_MAP = {
@ -416,57 +470,36 @@ class SoundcloudUserIE(SoundcloudPlaylistBaseIE):
resolv_url, uploader, 'Downloading user info')

resource = mobj.group('rsrc') or 'all'
base_url = self._BASE_URL_MAP[resource] % user['id']

COMMON_QUERY = {
'limit': 50,
'client_id': self._CLIENT_ID,
'linked_partitioning': '1',
}
return self._extract_playlist(
self._BASE_URL_MAP[resource] % user['id'], compat_str(user['id']),
'%s (%s)' % (user['username'], self._TITLE_MAP[resource]))

query = COMMON_QUERY.copy()
query['offset'] = 0

next_href = base_url + '?' + compat_urllib_parse_urlencode(query)
class SoundcloudTrackStationIE(SoundcloudPagedPlaylistBaseIE):
_VALID_URL = r'https?://(?:(?:www|m)\.)?soundcloud\.com/stations/track/[^/]+/(?P<id>[^/?#&]+)'
IE_NAME = 'soundcloud:trackstation'
_TESTS = [{
'url': 'https://soundcloud.com/stations/track/officialsundial/your-text',
'info_dict': {
'id': '286017854',
'title': 'Track station: your-text',
},
'playlist_mincount': 47,
}]

entries = []
for i in itertools.count():
response = self._download_json(
next_href, uploader, 'Downloading track page %s' % (i + 1))
def _real_extract(self, url):
track_name = self._match_id(url)

collection = response['collection']
if not collection:
break
webpage = self._download_webpage(url, track_name)

def resolve_permalink_url(candidates):
for cand in candidates:
if isinstance(cand, dict):
permalink_url = cand.get('permalink_url')
entry_id = self._extract_id(cand)
if permalink_url and permalink_url.startswith('http'):
return permalink_url, entry_id
track_id = self._search_regex(
r'soundcloud:track-stations:(\d+)', webpage, 'track id')

for e in collection:
permalink_url, entry_id = resolve_permalink_url((e, e.get('track'), e.get('playlist')))
if permalink_url:
entries.append(self.url_result(permalink_url, video_id=entry_id))

next_href = response.get('next_href')
if not next_href:
break

parsed_next_href = compat_urlparse.urlparse(response['next_href'])
qs = compat_urlparse.parse_qs(parsed_next_href.query)
qs.update(COMMON_QUERY)
next_href = compat_urlparse.urlunparse(
parsed_next_href._replace(query=compat_urllib_parse_urlencode(qs, True)))

return {
'_type': 'playlist',
'id': compat_str(user['id']),
'title': '%s (%s)' % (user['username'], self._TITLE_MAP[resource]),
'entries': entries,
}
return self._extract_playlist(
'%s/stations/soundcloud:track-stations:%s/tracks'
% (self._API_V2_BASE, track_id),
track_id, 'Track station: %s' % track_name)


class SoundcloudPlaylistIE(SoundcloudPlaylistBaseIE):

@ -4,6 +4,7 @@ from __future__ import unicode_literals
import re

from .common import InfoExtractor
from .nexx import NexxEmbedIE
from .spiegeltv import SpiegeltvIE
from ..compat import compat_urlparse
from ..utils import (
@ -121,6 +122,26 @@ class SpiegelArticleIE(InfoExtractor):

},
'playlist_count': 6,
}, {
# Nexx iFrame embed
'url': 'http://www.spiegel.de/sptv/spiegeltv/spiegel-tv-ueber-schnellste-katapult-achterbahn-der-welt-taron-a-1137884.html',
'info_dict': {
'id': '161464',
'ext': 'mp4',
'title': 'Nervenkitzel Achterbahn',
'alt_title': 'Karussellbauer in Deutschland',
'description': 'md5:ffe7b1cc59a01f585e0569949aef73cc',
'release_year': 2005,
'creator': 'SPIEGEL TV',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 2761,
'timestamp': 1394021479,
'upload_date': '20140305',
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}]

def _real_extract(self, url):
@ -143,6 +164,9 @@ class SpiegelArticleIE(InfoExtractor):
entries = [
self.url_result(compat_urlparse.urljoin(
self.http_scheme() + '//spiegel.de/', embed_path))
for embed_path in embeds
]
return self.playlist_result(entries)
for embed_path in embeds]
if embeds:
return self.playlist_result(entries)

return self.playlist_from_matches(
NexxEmbedIE._extract_urls(webpage), ie=NexxEmbedIE.ie_key())

@ -1,114 +1,17 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from ..compat import compat_urllib_parse_urlparse
from ..utils import (
determine_ext,
float_or_none,
)
from .nexx import NexxIE


class SpiegeltvIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?spiegel\.tv/(?:#/)?filme/(?P<id>[\-a-z0-9]+)'
_TESTS = [{
'url': 'http://www.spiegel.tv/filme/flug-mh370/',
'info_dict': {
'id': 'flug-mh370',
'ext': 'm4v',
'title': 'Flug MH370',
'description': 'Das Rätsel um die Boeing 777 der Malaysia-Airlines',
'thumbnail': r're:http://.*\.jpg$',
},
'params': {
# m3u8 download
'skip_download': True,
}
}, {
'url': 'http://www.spiegel.tv/#/filme/alleskino-die-wahrheit-ueber-maenner/',
_VALID_URL = r'https?://(?:www\.)?spiegel\.tv/videos/(?P<id>\d+)'
_TEST = {
'url': 'http://www.spiegel.tv/videos/161681-flug-mh370/',
'only_matching': True,
}]
}

def _real_extract(self, url):
if '/#/' in url:
url = url.replace('/#/', '/')
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
title = self._html_search_regex(r'<h1.*?>(.*?)</h1>', webpage, 'title')

apihost = 'http://spiegeltv-ivms2-restapi.s3.amazonaws.com'
version_json = self._download_json(
'%s/version.json' % apihost, video_id,
note='Downloading version information')
version_name = version_json['version_name']

slug_json = self._download_json(
'%s/%s/restapi/slugs/%s.json' % (apihost, version_name, video_id),
video_id,
note='Downloading object information')
oid = slug_json['object_id']

media_json = self._download_json(
'%s/%s/restapi/media/%s.json' % (apihost, version_name, oid),
video_id, note='Downloading media information')
uuid = media_json['uuid']
is_wide = media_json['is_wide']

server_json = self._download_json(
'http://spiegeltv-prod-static.s3.amazonaws.com/projectConfigs/projectConfig.json',
video_id, note='Downloading server information')

format = '16x9' if is_wide else '4x3'

formats = []
for streamingserver in server_json['streamingserver']:
endpoint = streamingserver.get('endpoint')
if not endpoint:
continue
play_path = 'mp4:%s_spiegeltv_0500_%s.m4v' % (uuid, format)
if endpoint.startswith('rtmp'):
formats.append({
'url': endpoint,
'format_id': 'rtmp',
'app': compat_urllib_parse_urlparse(endpoint).path[1:],
'play_path': play_path,
'player_path': 'http://prod-static.spiegel.tv/frontend-076.swf',
'ext': 'flv',
'rtmp_live': True,
})
elif determine_ext(endpoint) == 'm3u8':
formats.append({
'url': endpoint.replace('[video]', play_path),
'ext': 'm4v',
'format_id': 'hls', # Prefer hls since it allows to workaround georestriction
'protocol': 'm3u8',
'preference': 1,
'http_headers': {
'Accept-Encoding': 'deflate', # gzip causes trouble on the server side
},
})
else:
formats.append({
'url': endpoint,
})
self._check_formats(formats, video_id)

thumbnails = []
for image in media_json['images']:
thumbnails.append({
'url': image['url'],
'width': image['width'],
'height': image['height'],
})

description = media_json['subtitle']
duration = float_or_none(media_json.get('duration_in_ms'), scale=1000)

return {
'id': video_id,
'title': title,
'description': description,
'duration': duration,
'thumbnails': thumbnails,
'formats': formats,
}
return self.url_result(
'https://api.nexx.cloud/v3/748/videos/byid/%s'
% self._match_id(url), ie=NexxIE.ie_key())
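
The rewritten SpiegeltvIE above no longer builds formats itself; it hands the Nexx API URL back to the core via `url_result`, which returns a `url`-type info dict that youtube-dl re-dispatches to the extractor named by `ie`. Roughly, as a sketch of that helper's contract (modeled on `InfoExtractor.url_result` of this period, not copied from the diff):

def url_result(url, ie=None, video_id=None, video_title=None):
    # A 'url'-type result tells the downloader core to run extraction again
    # on `url`, optionally pinned to the extractor whose key is `ie`.
    video_info = {
        '_type': 'url',
        'url': url,
        'ie_key': ie,
    }
    if video_id is not None:
        video_info['id'] = video_id
    if video_title is not None:
        video_info['title'] = video_title
    return video_info

# The new SpiegeltvIE thus effectively returns something like:
# {'_type': 'url',
#  'url': 'https://api.nexx.cloud/v3/748/videos/byid/161681',
#  'ie_key': 'Nexx'}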

@ -4,7 +4,11 @@ from __future__ import unicode_literals
import re

from .common import InfoExtractor
from ..utils import js_to_json
from ..utils import (
determine_ext,
int_or_none,
js_to_json,
)


class SportBoxEmbedIE(InfoExtractor):
@ -14,8 +18,10 @@ class SportBoxEmbedIE(InfoExtractor):
'info_dict': {
'id': '211355',
'ext': 'mp4',
'title': 'В Новороссийске прошел детский турнир «Поле славы боевой»',
'title': '211355',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 292,
'view_count': int,
},
'params': {
# m3u8 download
@ -24,6 +30,9 @@ class SportBoxEmbedIE(InfoExtractor):
}, {
'url': 'http://news.sportbox.ru/vdl/player?nid=370908&only_player=1&autostart=false&playeri=2&height=340&width=580',
'only_matching': True,
}, {
'url': 'https://news.sportbox.ru/vdl/player/media/193095',
'only_matching': True,
}]

@staticmethod
@ -37,36 +46,34 @@ class SportBoxEmbedIE(InfoExtractor):

webpage = self._download_webpage(url, video_id)

wjplayer_data = self._parse_json(
self._search_regex(
r'(?s)wjplayer\(({.+?})\);', webpage, 'wjplayer settings'),
video_id, transform_source=js_to_json)

formats = []

def cleanup_js(code):
# desktop_advert_config contains complex Javascripts and we don't need it
return js_to_json(re.sub(r'desktop_advert_config.*', '', code))

jwplayer_data = self._parse_json(self._search_regex(
r'(?s)player\.setup\(({.+?})\);', webpage, 'jwplayer settings'), video_id,
transform_source=cleanup_js)

hls_url = jwplayer_data.get('hls_url')
if hls_url:
formats.extend(self._extract_m3u8_formats(
hls_url, video_id, ext='mp4', m3u8_id='hls'))

rtsp_url = jwplayer_data.get('rtsp_url')
if rtsp_url:
formats.append({
'url': rtsp_url,
'format_id': 'rtsp',
})

for source in wjplayer_data['sources']:
src = source.get('src')
if not src:
continue
if determine_ext(src) == 'm3u8':
formats.extend(self._extract_m3u8_formats(
src, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
else:
formats.append({
'url': src,
})
self._sort_formats(formats)

title = jwplayer_data['node_title']
thumbnail = jwplayer_data.get('image_url')
view_count = int_or_none(self._search_regex(
r'Просмотров\s*:\s*(\d+)', webpage, 'view count', default=None))

return {
'id': video_id,
'title': title,
'thumbnail': thumbnail,
'title': video_id,
'thumbnail': wjplayer_data.get('poster'),
'duration': int_or_none(wjplayer_data.get('duration')),
'view_count': view_count,
'formats': formats,
}

@ -181,7 +181,8 @@ class SVTPlayIE(SVTBaseIE):

if video_id:
data = self._download_json(
'http://www.svt.se/videoplayer-api/video/%s' % video_id, video_id)
'https://api.svt.se/videoplayer-api/video/%s' % video_id,
video_id, headers=self.geo_verification_headers())
info_dict = self._extract_video(data, video_id)
if not info_dict.get('title'):
info_dict['title'] = re.sub(

@ -8,6 +8,9 @@ from ..utils import extract_attributes


class TBSIE(TurnerBaseIE):
# https://github.com/rg3/youtube-dl/issues/13658
_WORKING = False

_VALID_URL = r'https?://(?:www\.)?(?P<site>tbs|tntdrama)\.com/videos/(?:[^/]+/)+(?P<id>[^/?#]+)\.html'
_TESTS = [{
'url': 'http://www.tbs.com/videos/people-of-earth/season-1/extras/2007318/theatrical-trailer.html',
@ -17,7 +20,8 @@ class TBSIE(TurnerBaseIE):
'ext': 'mp4',
'title': 'Theatrical Trailer',
'description': 'Catch the latest comedy from TBS, People of Earth, premiering Halloween night--Monday, October 31, at 9/8c.',
}
},
'skip': 'TBS videos are deleted after a while',
}, {
'url': 'http://www.tntdrama.com/videos/good-behavior/season-1/extras/1538823/you-better-run.html',
'md5': 'ce53c6ead5e9f3280b4ad2031a6fab56',
@ -26,7 +30,8 @@ class TBSIE(TurnerBaseIE):
'ext': 'mp4',
'title': 'You Better Run',
'description': 'Letty Raines must figure out what she\'s running toward while running away from her past. Good Behavior premieres November 15 at 9/8c.',
}
},
'skip': 'TBS videos are deleted after a while',
}]

def _real_extract(self, url):

@ -1,48 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from .jwplatform import JWPlatformIE
from ..utils import unified_strdate


class TeamFourStarIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?teamfourstar\.com/(?P<id>[a-z0-9\-]+)'
_TEST = {
'url': 'http://teamfourstar.com/tfs-abridged-parody-episode-1-2/',
'info_dict': {
'id': '0WdZO31W',
'title': 'TFS Abridged Parody Episode 1',
'description': 'md5:d60bc389588ebab2ee7ad432bda953ae',
'ext': 'mp4',
'timestamp': 1394168400,
'upload_date': '20080508',
},
}

def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)

jwplatform_url = JWPlatformIE._extract_url(webpage)

video_title = self._html_search_regex(
r'<h1[^>]+class="entry-title"[^>]*>(?P<title>.+?)</h1>',
webpage, 'title')
video_date = unified_strdate(self._html_search_regex(
r'<span[^>]+class="meta-date date updated"[^>]*>(?P<date>.+?)</span>',
webpage, 'date', fatal=False))
video_description = self._html_search_regex(
r'(?s)<div[^>]+class="content-inner"[^>]*>.*?(?P<description><p>.+?)</div>',
webpage, 'description', fatal=False)
video_thumbnail = self._og_search_thumbnail(webpage)

return {
'_type': 'url_transparent',
'display_id': display_id,
'title': video_title,
'description': video_description,
'upload_date': video_date,
'thumbnail': video_thumbnail,
'url': jwplatform_url,
}

@ -271,20 +271,22 @@ class TEDIE(InfoExtractor):
}

def _get_subtitles(self, video_id, talk_info):
languages = [lang['languageCode'] for lang in talk_info.get('languages', [])]
if languages:
sub_lang_list = {}
for l in languages:
sub_lang_list[l] = [
{
'url': 'http://www.ted.com/talks/subtitles/id/%s/lang/%s/format/%s' % (video_id, l, ext),
'ext': ext,
}
for ext in ['ted', 'srt']
]
return sub_lang_list
else:
return {}
sub_lang_list = {}
for language in try_get(
talk_info,
(lambda x: x['downloads']['languages'],
lambda x: x['languages']), list):
lang_code = language.get('languageCode') or language.get('ianaCode')
if not lang_code:
continue
sub_lang_list[lang_code] = [
{
'url': 'http://www.ted.com/talks/subtitles/id/%s/lang/%s/format/%s' % (video_id, lang_code, ext),
'ext': ext,
}
for ext in ['ted', 'srt']
]
return sub_lang_list
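
The rewritten `_get_subtitles` above relies on `try_get` accepting a tuple of getters: each lambda is tried in order, exceptions from missing keys are swallowed, and the first value matching the expected type wins. A small sketch of that utility's contract (modeled on `youtube_dl.utils.try_get`), with a usage example shaped like the call above:

def try_get(src, getter, expected_type=None):
    # Accept a single getter or a sequence of getters; return the first value
    # that can be extracted without raising and matches expected_type.
    if not isinstance(getter, (list, tuple)):
        getter = [getter]
    for get in getter:
        try:
            v = get(src)
        except (AttributeError, KeyError, TypeError, IndexError):
            pass
        else:
            if expected_type is None or isinstance(v, expected_type):
                return v

talk_info = {'languages': [{'languageCode': 'en'}]}
langs = try_get(talk_info,
                (lambda x: x['downloads']['languages'],
                 lambda x: x['languages']), list)
assert langs == [{'languageCode': 'en'}]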

def _watch_info(self, url, name):
webpage = self._download_webpage(url, name)

@ -50,7 +50,7 @@ class TwentyMinutenIE(InfoExtractor):
@staticmethod
def _extract_urls(webpage):
return [m.group('url') for m in re.finditer(
r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?(?:www\.)?20min\.ch/videoplayer/videoplayer.html\?.*?\bvideoId@\d+.*?)\1',
r'<iframe[^>]+src=(["\'])(?P<url>(?:(?:https?:)?//)?(?:www\.)?20min\.ch/videoplayer/videoplayer.html\?.*?\bvideoId@\d+.*?)\1',
webpage)]

def _real_extract(self, url):

@ -7,20 +7,38 @@ from .common import InfoExtractor
|
||||
from ..compat import compat_urlparse
|
||||
from ..utils import (
|
||||
determine_ext,
|
||||
float_or_none,
|
||||
xpath_text,
|
||||
remove_end,
|
||||
int_or_none,
|
||||
dict_get,
|
||||
ExtractorError,
|
||||
float_or_none,
|
||||
int_or_none,
|
||||
remove_end,
|
||||
try_get,
|
||||
    xpath_text,
)

from .periscope import PeriscopeIE


class TwitterBaseIE(InfoExtractor):
    def _get_vmap_video_url(self, vmap_url, video_id):
    def _extract_formats_from_vmap_url(self, vmap_url, video_id):
        vmap_data = self._download_xml(vmap_url, video_id)
        return xpath_text(vmap_data, './/MediaFile').strip()
        video_url = xpath_text(vmap_data, './/MediaFile').strip()
        if determine_ext(video_url) == 'm3u8':
            return self._extract_m3u8_formats(
                video_url, video_id, ext='mp4', m3u8_id='hls',
                entry_protocol='m3u8_native')
        return [{
            'url': video_url,
        }]

    @staticmethod
    def _search_dimensions_in_video_url(a_format, video_url):
        m = re.search(r'/(?P<width>\d+)x(?P<height>\d+)/', video_url)
        if m:
            a_format.update({
                'width': int(m.group('width')),
                'height': int(m.group('height')),
            })


class TwitterCardIE(TwitterBaseIE):
@@ -36,7 +54,8 @@ class TwitterCardIE(TwitterBaseIE):
                'title': 'Twitter Card',
                'thumbnail': r're:^https?://.*\.jpg$',
                'duration': 30.033,
            }
            },
            'skip': 'Video gone',
        },
        {
            'url': 'https://twitter.com/i/cards/tfw/v1/623160978427936768',
@@ -48,6 +67,7 @@ class TwitterCardIE(TwitterBaseIE):
                'thumbnail': r're:^https?://.*\.jpg',
                'duration': 80.155,
            },
            'skip': 'Video gone',
        },
        {
            'url': 'https://twitter.com/i/cards/tfw/v1/654001591733886977',
@@ -65,7 +85,7 @@ class TwitterCardIE(TwitterBaseIE):
        },
        {
            'url': 'https://twitter.com/i/cards/tfw/v1/665289828897005568',
            'md5': 'ab2745d0b0ce53319a534fccaa986439',
            'md5': '6dabeaca9e68cbb71c99c322a4b42a11',
            'info_dict': {
                'id': 'iBb2x00UVlv',
                'ext': 'mp4',
@@ -73,16 +93,17 @@ class TwitterCardIE(TwitterBaseIE):
                'uploader_id': '1189339351084113920',
                'uploader': 'ArsenalTerje',
                'title': 'Vine by ArsenalTerje',
                'timestamp': 1447451307,
            },
            'add_ie': ['Vine'],
        }, {
            'url': 'https://twitter.com/i/videos/tweet/705235433198714880',
            'md5': '3846d0a07109b5ab622425449b59049d',
            'md5': '884812a2adc8aaf6fe52b15ccbfa3b88',
            'info_dict': {
                'id': '705235433198714880',
                'ext': 'mp4',
                'title': 'Twitter web player',
                'thumbnail': r're:^https?://.*\.jpg',
                'thumbnail': r're:^https?://.*',
            },
        }, {
            'url': 'https://twitter.com/i/videos/752274308186120192',
@@ -90,6 +111,59 @@ class TwitterCardIE(TwitterBaseIE):
        },
    ]

    def _parse_media_info(self, media_info, video_id):
        formats = []
        for media_variant in media_info.get('variants', []):
            media_url = media_variant['url']
            if media_url.endswith('.m3u8'):
                formats.extend(self._extract_m3u8_formats(media_url, video_id, ext='mp4', m3u8_id='hls'))
            elif media_url.endswith('.mpd'):
                formats.extend(self._extract_mpd_formats(media_url, video_id, mpd_id='dash'))
            else:
                vbr = int_or_none(dict_get(media_variant, ('bitRate', 'bitrate')), scale=1000)
                a_format = {
                    'url': media_url,
                    'format_id': 'http-%d' % vbr if vbr else 'http',
                    'vbr': vbr,
                }
                # Reported bitRate may be zero
                if not a_format['vbr']:
                    del a_format['vbr']

                self._search_dimensions_in_video_url(a_format, media_url)

                formats.append(a_format)
        return formats

    def _extract_mobile_formats(self, username, video_id):
        webpage = self._download_webpage(
            'https://mobile.twitter.com/%s/status/%s' % (username, video_id),
            video_id, 'Downloading mobile webpage',
            headers={
                # A recent mobile UA is necessary for `gt` cookie
                'User-Agent': 'Mozilla/5.0 (Android 6.0.1; Mobile; rv:54.0) Gecko/54.0 Firefox/54.0',
            })
        main_script_url = self._html_search_regex(
            r'<script[^>]+src="([^"]+main\.[^"]+)"', webpage, 'main script URL')
        main_script = self._download_webpage(
            main_script_url, video_id, 'Downloading main script')
        bearer_token = self._search_regex(
            r'BEARER_TOKEN\s*:\s*"([^"]+)"',
            main_script, 'bearer token')
        guest_token = self._search_regex(
            r'document\.cookie\s*=\s*decodeURIComponent\("gt=(\d+)',
            webpage, 'guest token')
        api_data = self._download_json(
            'https://api.twitter.com/2/timeline/conversation/%s.json' % video_id,
            video_id, 'Downloading mobile API data',
            headers={
                'Authorization': 'Bearer ' + bearer_token,
                'x-guest-token': guest_token,
            })
        media_info = try_get(api_data, lambda o: o['globalObjects']['tweets'][video_id]
                                                 ['extended_entities']['media'][0]['video_info']) or {}
        return self._parse_media_info(media_info, video_id)
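The new `_extract_mobile_formats` above boils down to three requests: the mobile page (which sets a `gt` guest-token cookie via `document.cookie`), the main script (which embeds a public bearer token), and the JSON timeline API authorized by both tokens. A minimal standalone sketch of that flow, using only the Python 3 standard library; the endpoint, header names, and regexes are copied from the diff, but Twitter can change any of them server-side, so treat this as illustrative rather than stable:

```python
import json
import re
from urllib.request import Request, urlopen

MOBILE_UA = 'Mozilla/5.0 (Android 6.0.1; Mobile; rv:54.0) Gecko/54.0 Firefox/54.0'


def fetch(url, headers=None):
    return urlopen(Request(url, headers=headers or {})).read().decode('utf-8')


def mobile_video_info(username, video_id):
    # 1. The mobile status page assigns the guest token in inline JS
    page = fetch('https://mobile.twitter.com/%s/status/%s' % (username, video_id),
                 {'User-Agent': MOBILE_UA})
    guest_token = re.search(
        r'document\.cookie\s*=\s*decodeURIComponent\("gt=(\d+)', page).group(1)
    # 2. The referenced main.*.js bundle contains the public bearer token
    script_url = re.search(r'<script[^>]+src="([^"]+main\.[^"]+)"', page).group(1)
    bearer_token = re.search(
        r'BEARER_TOKEN\s*:\s*"([^"]+)"', fetch(script_url)).group(1)
    # 3. Both tokens together authorize the conversation timeline API
    api = fetch('https://api.twitter.com/2/timeline/conversation/%s.json' % video_id,
                {'Authorization': 'Bearer ' + bearer_token,
                 'x-guest-token': guest_token})
    tweet = json.loads(api)['globalObjects']['tweets'][video_id]
    return tweet['extended_entities']['media'][0]['video_info']
```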

    def _real_extract(self, url):
        video_id = self._match_id(url)

@@ -117,14 +191,6 @@ class TwitterCardIE(TwitterBaseIE):
        if periscope_url:
            return self.url_result(periscope_url, PeriscopeIE.ie_key())

        def _search_dimensions_in_video_url(a_format, video_url):
            m = re.search(r'/(?P<width>\d+)x(?P<height>\d+)/', video_url)
            if m:
                a_format.update({
                    'width': int(m.group('width')),
                    'height': int(m.group('height')),
                })

        video_url = config.get('video_url') or config.get('playlist', [{}])[0].get('source')

        if video_url:
@@ -135,15 +201,14 @@ class TwitterCardIE(TwitterBaseIE):
                'url': video_url,
            }

            _search_dimensions_in_video_url(f, video_url)
            self._search_dimensions_in_video_url(f, video_url)

            formats.append(f)

        vmap_url = config.get('vmapUrl') or config.get('vmap_url')
        if vmap_url:
            formats.append({
                'url': self._get_vmap_video_url(vmap_url, video_id),
            })
            formats.extend(
                self._extract_formats_from_vmap_url(vmap_url, video_id))

        media_info = None

@@ -152,29 +217,14 @@ class TwitterCardIE(TwitterBaseIE):
                media_info = entity['mediaInfo']

        if media_info:
            for media_variant in media_info['variants']:
                media_url = media_variant['url']
                if media_url.endswith('.m3u8'):
                    formats.extend(self._extract_m3u8_formats(media_url, video_id, ext='mp4', m3u8_id='hls'))
                elif media_url.endswith('.mpd'):
                    formats.extend(self._extract_mpd_formats(media_url, video_id, mpd_id='dash'))
                else:
                    vbr = int_or_none(media_variant.get('bitRate'), scale=1000)
                    a_format = {
                        'url': media_url,
                        'format_id': 'http-%d' % vbr if vbr else 'http',
                        'vbr': vbr,
                    }
                    # Reported bitRate may be zero
                    if not a_format['vbr']:
                        del a_format['vbr']

                    _search_dimensions_in_video_url(a_format, media_url)

                    formats.append(a_format)

            formats.extend(self._parse_media_info(media_info, video_id))
            duration = float_or_none(media_info.get('duration', {}).get('nanos'), scale=1e9)

        username = config.get('user', {}).get('screen_name')
        if username:
            formats.extend(self._extract_mobile_formats(username, video_id))

        self._remove_duplicate_formats(formats)
        self._sort_formats(formats)

        title = self._search_regex(r'<title>([^<]+)</title>', webpage, 'title')
@@ -255,10 +305,10 @@ class TwitterIE(InfoExtractor):
        'info_dict': {
            'id': '700207533655363584',
            'ext': 'mp4',
            'title': 'JG - BEAT PROD: @suhmeduh #Damndaniel',
            'description': 'JG on Twitter: "BEAT PROD: @suhmeduh https://t.co/HBrQ4AfpvZ #Damndaniel https://t.co/byBooq2ejZ"',
            'title': 'Donte - BEAT PROD: @suhmeduh #Damndaniel',
            'description': 'Donte on Twitter: "BEAT PROD: @suhmeduh https://t.co/HBrQ4AfpvZ #Damndaniel https://t.co/byBooq2ejZ"',
            'thumbnail': r're:^https?://.*\.jpg',
            'uploader': 'JG',
            'uploader': 'Donte',
            'uploader_id': 'jaydingeer',
        },
        'params': {
@@ -270,9 +320,11 @@ class TwitterIE(InfoExtractor):
        'info_dict': {
            'id': 'MIOxnrUteUd',
            'ext': 'mp4',
            'title': 'Dr.Pepperの飲み方 #japanese #バカ #ドクペ #電動ガン',
            'uploader': 'TAKUMA',
            'uploader_id': '1004126642786242560',
            'title': 'FilmDrunk - Vine of the day',
            'description': 'FilmDrunk on Twitter: "Vine of the day https://t.co/xmTvRdqxWf"',
            'uploader': 'FilmDrunk',
            'uploader_id': 'Filmdrunk',
            'timestamp': 1402826626,
            'upload_date': '20140615',
        },
        'add_ie': ['Vine'],
@@ -294,13 +346,28 @@ class TwitterIE(InfoExtractor):
        'info_dict': {
            'id': '1zqKVVlkqLaKB',
            'ext': 'mp4',
            'title': 'Sgt Kerry Schmidt - Ontario Provincial Police - Road rage, mischief, assault, rollover and fire in one occurrence',
            'title': 'Sgt Kerry Schmidt - LIVE on #Periscope: Road rage, mischief, assault, rollover and fire in one occurrence',
            'description': 'Sgt Kerry Schmidt on Twitter: "LIVE on #Periscope: Road rage, mischief, assault, rollover and fire in one occurrence https://t.co/EKrVgIXF3s"',
            'upload_date': '20160923',
            'uploader_id': 'OPP_HSD',
            'uploader': 'Sgt Kerry Schmidt - Ontario Provincial Police',
            'uploader': 'Sgt Kerry Schmidt',
            'timestamp': 1474613214,
        },
        'add_ie': ['Periscope'],
    }, {
        # has mp4 formats via mobile API
        'url': 'https://twitter.com/news_al3alm/status/852138619213144067',
        'info_dict': {
            'id': '852138619213144067',
            'ext': 'mp4',
            'title': 'عالم الأخبار - كلمة تاريخية بجلسة الجناسي التاريخية.. النائب خالد مؤنس العتيبي للمعارضين : اتقوا الله .. الظلم ظلمات يوم القيامة',
            'description': 'عالم الأخبار on Twitter: "كلمة تاريخية بجلسة الجناسي التاريخية.. النائب خالد مؤنس العتيبي للمعارضين : اتقوا الله .. الظلم ظلمات يوم القيامة https://t.co/xg6OhpyKfN"',
            'uploader': 'عالم الأخبار',
            'uploader_id': 'news_al3alm',
        },
        'params': {
            'format': 'best[format_id^=http-]',
        },
    }]

    def _real_extract(self, url):
@@ -393,7 +460,7 @@ class TwitterAmplifyIE(TwitterBaseIE):

        vmap_url = self._html_search_meta(
            'twitter:amplify:vmap', webpage, 'vmap url')
        video_url = self._get_vmap_video_url(vmap_url, video_id)
        formats = self._extract_formats_from_vmap_url(vmap_url, video_id)

        thumbnails = []
        thumbnail = self._html_search_meta(
@@ -415,11 +482,10 @@ class TwitterAmplifyIE(TwitterBaseIE):
            })

        video_w, video_h = _find_dimension('player')
        formats = [{
            'url': video_url,
        formats[0].update({
            'width': video_w,
            'height': video_h,
        }]
        })

        return {
            'id': video_id,
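The `_search_dimensions_in_video_url` helper that the Twitter extractors now share simply reads a `WxH` segment out of the URL path. A quick self-contained illustration (the URL is made up for the demo):

```python
import re

a_format = {'url': 'https://video.twimg.com/ext_tw_video/1234/pu/vid/640x360/clip.mp4'}
m = re.search(r'/(?P<width>\d+)x(?P<height>\d+)/', a_format['url'])
if m:
    a_format.update({
        'width': int(m.group('width')),    # 640
        'height': int(m.group('height')),  # 360
    })
print(a_format)
```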
youtube_dl/extractor/udemy.py
@@ -15,6 +15,7 @@ from ..utils import (
    ExtractorError,
    float_or_none,
    int_or_none,
    js_to_json,
    sanitized_Request,
    unescapeHTML,
    urlencode_postdata,
@@ -73,7 +74,7 @@ class UdemyIE(InfoExtractor):
            return compat_urlparse.urljoin(base_url, url) if not url.startswith('http') else url

        checkout_url = unescapeHTML(self._search_regex(
            r'href=(["\'])(?P<url>(?:https?://(?:www\.)?udemy\.com)?/payment/checkout/.+?)\1',
            r'href=(["\'])(?P<url>(?:https?://(?:www\.)?udemy\.com)?/(?:payment|cart)/checkout/.+?)\1',
            webpage, 'checkout url', group='url', default=None))
        if checkout_url:
            raise ExtractorError(
@@ -268,6 +269,25 @@ class UdemyIE(InfoExtractor):
                f = add_output_format_meta(f, format_id)
                formats.append(f)

        def extract_subtitles(track_list):
            if not isinstance(track_list, list):
                return
            for track in track_list:
                if not isinstance(track, dict):
                    continue
                if track.get('kind') != 'captions':
                    continue
                src = track.get('src')
                if not src or not isinstance(src, compat_str):
                    continue
                lang = track.get('language') or track.get(
                    'srclang') or track.get('label')
                sub_dict = automatic_captions if track.get(
                    'autogenerated') is True else subtitles
                sub_dict.setdefault(lang, []).append({
                    'url': src,
                })

        download_urls = asset.get('download_urls')
        if isinstance(download_urls, dict):
            extract_formats(download_urls.get('Video'))
@@ -315,23 +335,16 @@ class UdemyIE(InfoExtractor):
                extract_formats(data.get('sources'))
                if not duration:
                    duration = int_or_none(data.get('duration'))
                tracks = data.get('tracks')
                if isinstance(tracks, list):
                    for track in tracks:
                        if not isinstance(track, dict):
                            continue
                        if track.get('kind') != 'captions':
                            continue
                        src = track.get('src')
                        if not src or not isinstance(src, compat_str):
                            continue
                        lang = track.get('language') or track.get(
                            'srclang') or track.get('label')
                        sub_dict = automatic_captions if track.get(
                            'autogenerated') is True else subtitles
                        sub_dict.setdefault(lang, []).append({
                            'url': src,
                        })
                extract_subtitles(data.get('tracks'))

        if not subtitles and not automatic_captions:
            text_tracks = self._parse_json(
                self._search_regex(
                    r'text-tracks=(["\'])(?P<data>\[.+?\])\1', view_html,
                    'text tracks', default='{}', group='data'), video_id,
                transform_source=lambda s: js_to_json(unescapeHTML(s)),
                fatal=False)
            extract_subtitles(text_tracks)

        self._sort_formats(formats, field_preference=('height', 'width', 'tbr', 'format_id'))
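The refactor above deduplicates the track-parsing loop into `extract_subtitles`, which is now fed both `data['tracks']` and the parsed `text-tracks` attribute. A minimal sketch of how the routing into `subtitles` versus `automatic_captions` works, with made-up track dicts shaped like the ones the extractor handles:

```python
subtitles, automatic_captions = {}, {}


def extract_subtitles(track_list):
    if not isinstance(track_list, list):
        return
    for track in track_list:
        if not isinstance(track, dict) or track.get('kind') != 'captions':
            continue
        src = track.get('src')
        if not src:
            continue
        lang = track.get('language') or track.get('srclang') or track.get('label')
        sub_dict = automatic_captions if track.get('autogenerated') is True else subtitles
        sub_dict.setdefault(lang, []).append({'url': src})


extract_subtitles([
    {'kind': 'captions', 'src': 'https://example.com/en.vtt', 'language': 'en'},
    {'kind': 'captions', 'src': 'https://example.com/en-auto.vtt', 'srclang': 'en',
     'autogenerated': True},
    {'kind': 'chapters', 'src': 'https://example.com/chapters.vtt'},  # ignored
])
print(subtitles)           # {'en': [{'url': 'https://example.com/en.vtt'}]}
print(automatic_captions)  # {'en': [{'url': 'https://example.com/en-auto.vtt'}]}
```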
youtube_dl/extractor/vh1.py
@@ -121,7 +121,11 @@ class VH1IE(MTVIE):
        idoc = self._download_xml(
            doc_url, video_id,
            'Downloading info', transform_source=fix_xml_ampersands)
        return self.playlist_result(
            [self._get_video_info(item) for item in idoc.findall('.//item')],
            playlist_id=video_id,
        )

        entries = []
        for item in idoc.findall('.//item'):
            info = self._get_video_info(item)
            if info:
                entries.append(info)

        return self.playlist_result(entries, playlist_id=video_id)
youtube_dl/extractor/vidio.py
@@ -56,7 +56,8 @@ class VidioIE(InfoExtractor):
        self._sort_formats(formats)

        duration = int_or_none(duration or self._search_regex(
            r'data-video-duration=(["\'])(?P<duartion>\d+)\1', webpage, 'duration'))
            r'data-video-duration=(["\'])(?P<duration>\d+)\1', webpage,
            'duration', fatal=False, group='duration'))
        thumbnail = thumbnail or self._og_search_thumbnail(webpage)

        like_count = int_or_none(self._search_regex(
youtube_dl/extractor/vidme.py
@@ -3,7 +3,10 @@ from __future__ import unicode_literals
import itertools

from .common import InfoExtractor
from ..compat import compat_HTTPError
from ..compat import (
    compat_HTTPError,
    compat_str,
)
from ..utils import (
    ExtractorError,
    int_or_none,
@@ -161,13 +164,28 @@ class VidmeIE(InfoExtractor):
                'or for violating the terms of use.',
                expected=True)

        formats = [{
            'format_id': f.get('type'),
            'url': f['uri'],
            'width': int_or_none(f.get('width')),
            'height': int_or_none(f.get('height')),
            'preference': 0 if f.get('type', '').endswith('clip') else 1,
        } for f in video.get('formats', []) if f.get('uri')]
        formats = []
        for f in video.get('formats', []):
            format_url = f.get('uri')
            if not format_url or not isinstance(format_url, compat_str):
                continue
            format_type = f.get('type')
            if format_type == 'dash':
                formats.extend(self._extract_mpd_formats(
                    format_url, video_id, mpd_id='dash', fatal=False))
            elif format_type == 'hls':
                formats.extend(self._extract_m3u8_formats(
                    format_url, video_id, 'mp4', entry_protocol='m3u8_native',
                    m3u8_id='hls', fatal=False))
            else:
                formats.append({
                    'format_id': f.get('type'),
                    'url': format_url,
                    'width': int_or_none(f.get('width')),
                    'height': int_or_none(f.get('height')),
                    'preference': 0 if f.get('type', '').endswith(
                        'clip') else 1,
                })

        if not formats and video.get('complete_url'):
            formats.append({
youtube_dl/extractor/vine.py
@@ -92,10 +92,12 @@ class VineIE(InfoExtractor):

        username = data.get('username')

        alt_title = 'Vine by %s' % username if username else None

        return {
            'id': video_id,
            'title': data.get('description'),
            'alt_title': 'Vine by %s' % username if username else None,
            'title': data.get('description') or alt_title or 'Vine video',
            'alt_title': alt_title,
            'thumbnail': data.get('thumbnailUrl'),
            'timestamp': unified_timestamp(data.get('created')),
            'uploader': username,
youtube_dl/extractor/vlive.py
@@ -236,7 +236,12 @@ class VLiveChannelIE(InfoExtractor):
                query={
                    'app_id': app_id,
                    'channelSeq': channel_seq,
                    'maxNumOfRows': 1000,
                    # Large values of maxNumOfRows (~300 or above) may cause
                    # empty responses (see [1]), e.g. this happens for [2] that
                    # has more than 300 videos.
                    # 1. https://github.com/rg3/youtube-dl/issues/13830
                    # 2. http://channels.vlive.tv/EDBF.
                    'maxNumOfRows': 100,
                    '_': int(time.time()),
                    'pageNo': page_num
                }
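The fix caps `maxNumOfRows` at 100 because, per issue 13830 cited in the comment, requesting roughly 300 rows or more can yield empty responses, so a channel now has to be walked page by page. A rough sketch of such a loop, where `fetch_page` is a hypothetical stand-in for the real API request:

```python
def iter_channel_videos(fetch_page, page_size=100):
    """Yield video rows page by page.

    fetch_page(page_no, max_rows) -> list of rows (empty when exhausted).
    """
    page_no = 1
    while True:
        rows = fetch_page(page_no, page_size)
        if not rows:
            break
        for row in rows:
            yield row
        if len(rows) < page_size:  # a short page means we reached the end
            break
        page_no += 1
```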
98 youtube_dl/extractor/voot.py Normal file
@@ -0,0 +1,98 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from .kaltura import KalturaIE
from ..utils import (
    ExtractorError,
    int_or_none,
    try_get,
    unified_timestamp,
)


class VootIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?voot\.com/(?:[^/]+/)+(?P<id>\d+)'
    _GEO_COUNTRIES = ['IN']
    _TESTS = [{
        'url': 'https://www.voot.com/shows/ishq-ka-rang-safed/1/360558/is-this-the-end-of-kamini-/441353',
        'info_dict': {
            'id': '0_8ledb18o',
            'ext': 'mp4',
            'title': 'Ishq Ka Rang Safed - Season 01 - Episode 340',
            'description': 'md5:06291fbbbc4dcbe21235c40c262507c1',
            'uploader_id': 'batchUser',
            'timestamp': 1472162937,
            'upload_date': '20160825',
            'duration': 1146,
            'series': 'Ishq Ka Rang Safed',
            'season_number': 1,
            'episode': 'Is this the end of Kamini?',
            'episode_number': 340,
            'view_count': int,
            'like_count': int,
        },
        'params': {
            'skip_download': True,
        },
        'expected_warnings': ['Failed to download m3u8 information'],
    }, {
        'url': 'https://www.voot.com/kids/characters/mighty-cat-masked-niyander-e-/400478/school-bag-disappears/440925',
        'only_matching': True,
    }, {
        'url': 'https://www.voot.com/movies/pandavas-5/424627',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        media_info = self._download_json(
            'https://wapi.voot.com/ws/ott/getMediaInfo.json', video_id,
            query={
                'platform': 'Web',
                'pId': 2,
                'mediaId': video_id,
            })

        status_code = try_get(media_info, lambda x: x['status']['code'], int)
        if status_code != 0:
            raise ExtractorError(media_info['status']['message'], expected=True)

        media = media_info['assets']

        entry_id = media['EntryId']
        title = media['MediaName']

        description, series, season_number, episode, episode_number = [None] * 5

        for meta in try_get(media, lambda x: x['Metas'], list) or []:
            key, value = meta.get('Key'), meta.get('Value')
            if not key or not value:
                continue
            if key == 'ContentSynopsis':
                description = value
            elif key == 'RefSeriesTitle':
                series = value
            elif key == 'RefSeriesSeason':
                season_number = int_or_none(value)
            elif key == 'EpisodeMainTitle':
                episode = value
            elif key == 'EpisodeNo':
                episode_number = int_or_none(value)

        return {
            '_type': 'url_transparent',
            'url': 'kaltura:1982551:%s' % entry_id,
            'ie_key': KalturaIE.ie_key(),
            'title': title,
            'description': description,
            'series': series,
            'season_number': season_number,
            'episode': episode,
            'episode_number': episode_number,
            'timestamp': unified_timestamp(media.get('CreationDate')),
            'duration': int_or_none(media.get('Duration')),
            'view_count': int_or_none(media.get('ViewCounter')),
            'like_count': int_or_none(media.get('like_counter')),
        }
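The extractor folds Voot's flat `Metas` key/value list into typed fields before delegating playback to the Kaltura extractor via a `url_transparent` result. A small sketch of that folding step, using a table-driven variant of the if/elif chain (sample input values are made up; the real code uses `int_or_none` instead of a bare `int` cast to survive malformed values):

```python
def parse_metas(metas):
    mapping = {
        'ContentSynopsis': ('description', str),
        'RefSeriesTitle': ('series', str),
        'RefSeriesSeason': ('season_number', int),
        'EpisodeMainTitle': ('episode', str),
        'EpisodeNo': ('episode_number', int),
    }
    fields = {}
    for meta in metas or []:
        key, value = meta.get('Key'), meta.get('Value')
        if key in mapping and value:
            name, cast = mapping[key]
            fields[name] = cast(value)
    return fields


print(parse_metas([
    {'Key': 'RefSeriesSeason', 'Value': '1'},
    {'Key': 'EpisodeNo', 'Value': '340'},
]))  # {'season_number': 1, 'episode_number': 340}
```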
youtube_dl/extractor/vzaar.py
@@ -1,6 +1,8 @@
# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..utils import (
    int_or_none,
@@ -28,6 +30,12 @@ class VzaarIE(InfoExtractor):
        },
    }]

    @staticmethod
    def _extract_urls(webpage):
        return re.findall(
            r'<iframe[^>]+src=["\']((?:https?:)?//(?:view\.vzaar\.com)/[0-9]+)',
            webpage)

    def _real_extract(self, url):
        video_id = self._match_id(url)
        video_data = self._download_json(
151 youtube_dl/extractor/watchbox.py Normal file
@@ -0,0 +1,151 @@
# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    int_or_none,
    js_to_json,
    strip_or_none,
    try_get,
    unified_timestamp,
)


class WatchBoxIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?watchbox\.de/(?P<kind>serien|filme)/(?:[^/]+/)*[^/]+-(?P<id>\d+)'
    _TESTS = [{
        # film
        'url': 'https://www.watchbox.de/filme/free-jimmy-12325.html',
        'info_dict': {
            'id': '341368',
            'ext': 'mp4',
            'title': 'Free Jimmy',
            'description': 'md5:bcd8bafbbf9dc0ef98063d344d7cc5f6',
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 4890,
            'age_limit': 16,
            'release_year': 2009,
        },
        'params': {
            'format': 'bestvideo',
            'skip_download': True,
        },
        'expected_warnings': ['Failed to download m3u8 information'],
    }, {
        # episode
        'url': 'https://www.watchbox.de/serien/ugly-americans-12231/staffel-1/date-in-der-hoelle-328286.html',
        'info_dict': {
            'id': '328286',
            'ext': 'mp4',
            'title': 'S01 E01 - Date in der Hölle',
            'description': 'md5:2f31c74a8186899f33cb5114491dae2b',
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 1291,
            'age_limit': 12,
            'release_year': 2010,
            'series': 'Ugly Americans',
            'season_number': 1,
            'episode': 'Date in der Hölle',
            'episode_number': 1,
        },
        'params': {
            'format': 'bestvideo',
            'skip_download': True,
        },
        'expected_warnings': ['Failed to download m3u8 information'],
    }, {
        'url': 'https://www.watchbox.de/serien/ugly-americans-12231/staffel-2/der-ring-des-powers-328270',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        kind, video_id = mobj.group('kind', 'id')

        webpage = self._download_webpage(url, video_id)

        source = self._parse_json(
            self._search_regex(
                r'(?s)source\s*:\s*({.+?})\s*,\s*\n', webpage, 'source',
                default='{}'),
            video_id, transform_source=js_to_json, fatal=False) or {}

        video_id = compat_str(source.get('videoId') or video_id)

        devapi = self._download_json(
            'http://api.watchbox.de/devapi/id/%s' % video_id, video_id, query={
                'format': 'json',
                'apikey': 'hbbtv',
            }, fatal=False)

        item = try_get(devapi, lambda x: x['items'][0], dict) or {}

        title = item.get('title') or try_get(
            item, lambda x: x['movie']['headline_movie'],
            compat_str) or source['title']

        formats = []
        hls_url = item.get('media_videourl_hls') or source.get('hls')
        if hls_url:
            formats.extend(self._extract_m3u8_formats(
                hls_url, video_id, 'mp4', entry_protocol='m3u8_native',
                m3u8_id='hls', fatal=False))
        dash_url = item.get('media_videourl_wv') or source.get('dash')
        if dash_url:
            formats.extend(self._extract_mpd_formats(
                dash_url, video_id, mpd_id='dash', fatal=False))
        mp4_url = item.get('media_videourl')
        if mp4_url:
            formats.append({
                'url': mp4_url,
                'format_id': 'mp4',
                'width': int_or_none(item.get('width')),
                'height': int_or_none(item.get('height')),
                'tbr': int_or_none(item.get('bitrate')),
            })
        self._sort_formats(formats)

        description = strip_or_none(item.get('descr'))
        thumbnail = item.get('media_content_thumbnail_large') or source.get('poster') or item.get('media_thumbnail')
        duration = int_or_none(item.get('media_length') or source.get('length'))
        timestamp = unified_timestamp(item.get('pubDate'))
        view_count = int_or_none(item.get('media_views'))
        age_limit = int_or_none(try_get(item, lambda x: x['movie']['fsk']))
        release_year = int_or_none(try_get(item, lambda x: x['movie']['rel_year']))

        info = {
            'id': video_id,
            'title': title,
            'description': description,
            'thumbnail': thumbnail,
            'duration': duration,
            'timestamp': timestamp,
            'view_count': view_count,
            'age_limit': age_limit,
            'release_year': release_year,
            'formats': formats,
        }

        if kind.lower() == 'serien':
            series = try_get(
                item, lambda x: x['special']['title'],
                compat_str) or source.get('format')
            season_number = int_or_none(self._search_regex(
                r'^S(\d{1,2})\s*E\d{1,2}', title, 'season number',
                default=None) or self._search_regex(
                r'/staffel-(\d+)/', url, 'season number', default=None))
            episode = source.get('title')
            episode_number = int_or_none(self._search_regex(
                r'^S\d{1,2}\s*E(\d{1,2})', title, 'episode number',
                default=None))
            info.update({
                'series': series,
                'season_number': season_number,
                'episode': episode,
                'episode_number': episode_number,
            })

        return info
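For episodes, the extractor derives season and episode numbers from an `SxxEyy` prefix in the title, falling back to the `/staffel-N/` path segment for the season. A quick demonstration of the three regexes on the test episode's data:

```python
import re

title = 'S01 E01 - Date in der Hölle'
season = re.search(r'^S(\d{1,2})\s*E\d{1,2}', title)
episode = re.search(r'^S\d{1,2}\s*E(\d{1,2})', title)
print(int(season.group(1)), int(episode.group(1)))  # 1 1

# URL fallback for the season when the title carries no SxxEyy prefix:
url = 'https://www.watchbox.de/serien/ugly-americans-12231/staffel-1/date-in-der-hoelle-328286.html'
m = re.search(r'/staffel-(\d+)/', url)
print(int(m.group(1)))  # 1
```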
youtube_dl/extractor/xxxymovies.py
@@ -39,8 +39,8 @@ class XXXYMoviesIE(InfoExtractor):
            r"video_url\s*:\s*'([^']+)'", webpage, 'video URL')

        title = self._html_search_regex(
            [r'<div class="block_header">\s*<h1>([^<]+)</h1>',
             r'<title>(.*?)\s*-\s*XXXYMovies\.com</title>'],
            [r'<div[^>]+\bclass="block_header"[^>]*>\s*<h1>([^<]+)<',
             r'<title>(.*?)\s*-\s*(?:XXXYMovies\.com|XXX\s+Movies)</title>'],
            webpage, 'title')

        thumbnail = self._search_regex(
118 youtube_dl/extractor/yandexdisk.py Normal file
@@ -0,0 +1,118 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    determine_ext,
    float_or_none,
    int_or_none,
    try_get,
    urlencode_postdata,
)


class YandexDiskIE(InfoExtractor):
    _VALID_URL = r'https?://yadi\.sk/[di]/(?P<id>[^/?#&]+)'

    _TESTS = [{
        'url': 'https://yadi.sk/i/VdOeDou8eZs6Y',
        'md5': '33955d7ae052f15853dc41f35f17581c',
        'info_dict': {
            'id': 'VdOeDou8eZs6Y',
            'ext': 'mp4',
            'title': '4.mp4',
            'duration': 168.6,
            'uploader': 'y.botova',
            'uploader_id': '300043621',
            'view_count': int,
        },
    }, {
        'url': 'https://yadi.sk/d/h3WAXvDS3Li3Ce',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        status = self._download_webpage(
            'https://disk.yandex.com/auth/status', video_id, query={
                'urlOrigin': url,
                'source': 'public',
                'md5': 'false',
            })

        sk = self._search_regex(
            r'(["\'])sk(?:External)?\1\s*:\s*(["\'])(?P<value>(?:(?!\2).)+)\2',
            status, 'sk', group='value')

        webpage = self._download_webpage(url, video_id)

        models = self._parse_json(
            self._search_regex(
                r'<script[^>]+id=["\']models-client[^>]+>\s*(\[.+?\])\s*</script',
                webpage, 'video JSON'),
            video_id)

        data = next(
            model['data'] for model in models
            if model.get('model') == 'resource')

        video_hash = data['id']
        title = data['name']

        models = self._download_json(
            'https://disk.yandex.com/models/', video_id,
            data=urlencode_postdata({
                '_model.0': 'videoInfo',
                'id.0': video_hash,
                '_model.1': 'do-get-resource-url',
                'id.1': video_hash,
                'version': '13.6',
                'sk': sk,
            }), query={'_m': 'videoInfo'})['models']

        videos = try_get(models, lambda x: x[0]['data']['videos'], list) or []
        source_url = try_get(
            models, lambda x: x[1]['data']['file'], compat_str)

        formats = []
        if source_url:
            formats.append({
                'url': source_url,
                'format_id': 'source',
                'ext': determine_ext(title, 'mp4'),
                'quality': 1,
            })
        for video in videos:
            format_url = video.get('url')
            if not format_url:
                continue
            if determine_ext(format_url) == 'm3u8':
                formats.extend(self._extract_m3u8_formats(
                    format_url, video_id, 'mp4', entry_protocol='m3u8_native',
                    m3u8_id='hls', fatal=False))
            else:
                formats.append({
                    'url': format_url,
                })
        self._sort_formats(formats)

        duration = float_or_none(try_get(
            models, lambda x: x[0]['data']['duration']), 1000)
        uploader = try_get(
            data, lambda x: x['user']['display_name'], compat_str)
        uploader_id = try_get(
            data, lambda x: x['user']['uid'], compat_str)
        view_count = int_or_none(try_get(
            data, lambda x: x['meta']['views_counter']))

        return {
            'id': video_id,
            'title': title,
            'duration': duration,
            'uploader': uploader,
            'uploader_id': uploader_id,
            'view_count': view_count,
            'formats': formats,
        }
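Note how the `/models/` endpoint batches two model queries in a single form-encoded POST, distinguished by the `.0`/`.1` suffixes: judging from the lookups above, `models[0]` answers the `videoInfo` query (duration plus rendition list) and `models[1]` the `do-get-resource-url` query (direct source file). A Python 3 sketch of just the payload construction, with a hypothetical resource id and `sk` value (the real code builds the same body via `urlencode_postdata`):

```python
from urllib.parse import urlencode

video_hash = '/public/abcdef'  # hypothetical resource id from the page JSON
payload = urlencode({
    '_model.0': 'videoInfo',            # -> models[0]: duration + renditions
    'id.0': video_hash,
    '_model.1': 'do-get-resource-url',  # -> models[1]: direct source file URL
    'id.1': video_hash,
    'version': '13.6',
    'sk': 'sk-value-scraped-from-the-auth-status-page',
})
print(payload)
```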
youtube_dl/extractor/youjizz.py
@@ -1,39 +1,95 @@
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    determine_ext,
    int_or_none,
    parse_duration,
)


class YouJizzIE(InfoExtractor):
    _VALID_URL = r'https?://(?:\w+\.)?youjizz\.com/videos/(?:[^/#?]+)?-(?P<id>[0-9]+)\.html(?:$|[?#])'
    _VALID_URL = r'https?://(?:\w+\.)?youjizz\.com/videos/(?:[^/#?]*-(?P<id>\d+)\.html|embed/(?P<embed_id>\d+))'
    _TESTS = [{
        'url': 'http://www.youjizz.com/videos/zeichentrick-1-2189178.html',
        'md5': '78fc1901148284c69af12640e01c6310',
        'md5': 'b1e1dfaa8bb9537d8b84eeda9cf4acf4',
        'info_dict': {
            'id': '2189178',
            'ext': 'mp4',
            'title': 'Zeichentrick 1',
            'age_limit': 18,
            'duration': 2874,
        }
    }, {
        'url': 'http://www.youjizz.com/videos/-2189178.html',
        'only_matching': True,
    }, {
        'url': 'https://www.youjizz.com/videos/embed/31991001',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # YouJizz's HTML5 player has invalid HTML
        webpage = webpage.replace('"controls', '" controls')
        age_limit = self._rta_search(webpage)
        video_title = self._html_search_regex(
            r'<title>\s*(.*)\s*</title>', webpage, 'title')
        mobj = re.match(self._VALID_URL, url)
        video_id = mobj.group('id') or mobj.group('embed_id')

        info_dict = self._parse_html5_media_entries(url, webpage, video_id)[0]
        webpage = self._download_webpage(url, video_id)

        title = self._html_search_regex(
            r'<title>(.+?)</title>', webpage, 'title')

        formats = []

        encodings = self._parse_json(
            self._search_regex(
                r'encodings\s*=\s*(\[.+?\]);\n', webpage, 'encodings',
                default='[]'),
            video_id, fatal=False)
        for encoding in encodings:
            if not isinstance(encoding, dict):
                continue
            format_url = encoding.get('filename')
            if not isinstance(format_url, compat_str):
                continue
            if determine_ext(format_url) == 'm3u8':
                formats.extend(self._extract_m3u8_formats(
                    format_url, video_id, 'mp4', entry_protocol='m3u8_native',
                    m3u8_id='hls', fatal=False))
            else:
                format_id = encoding.get('name') or encoding.get('quality')
                height = int_or_none(self._search_regex(
                    r'^(\d+)[pP]', format_id, 'height', default=None))
                formats.append({
                    'url': format_url,
                    'format_id': format_id,
                    'height': height,
                })

        if formats:
            info_dict = {
                'formats': formats,
            }
        else:
            # YouJizz's HTML5 player has invalid HTML
            webpage = webpage.replace('"controls', '" controls')
            info_dict = self._parse_html5_media_entries(
                url, webpage, video_id)[0]

        duration = parse_duration(self._search_regex(
            r'<strong>Runtime:</strong>([^<]+)', webpage, 'duration',
            default=None))
        uploader = self._search_regex(
            r'<strong>Uploaded By:.*?<a[^>]*>([^<]+)', webpage, 'uploader',
            default=None)

        info_dict.update({
            'id': video_id,
            'title': video_title,
            'age_limit': age_limit,
            'title': title,
            'age_limit': self._rta_search(webpage),
            'duration': duration,
            'uploader': uploader,
        })

        return info_dict
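The rewrite prefers the page's inline `encodings = [...]` JavaScript array over HTML5 media parsing, dispatching on the file extension and reading the height out of labels like `720p`. A self-contained sketch of that mapping; the `encodings` shape is inferred from the extractor above and the sample values are made up:

```python
import re

encodings = [
    {'name': '720p', 'filename': 'https://cdn.example.com/v/720.mp4'},
    {'quality': '480p', 'filename': 'https://cdn.example.com/v/master.m3u8'},
]

formats = []
for encoding in encodings:
    format_url = encoding.get('filename')
    if not format_url:
        continue
    if format_url.endswith('.m3u8'):
        # the real code expands the playlist via _extract_m3u8_formats
        formats.append({'url': format_url, 'protocol': 'm3u8_native'})
        continue
    format_id = encoding.get('name') or encoding.get('quality')
    height = re.search(r'^(\d+)[pP]', format_id or '')
    formats.append({
        'url': format_url,
        'format_id': format_id,
        'height': int(height.group(1)) if height else None,
    })
print(formats)
```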
youtube_dl/extractor/youku.py
@@ -1,7 +1,6 @@
# coding: utf-8
from __future__ import unicode_literals

import itertools
import random
import re
import string
@@ -14,7 +13,6 @@ from ..utils import (
    js_to_json,
    str_or_none,
    strip_jsonp,
    urljoin,
)


@@ -222,17 +220,42 @@ class YoukuShowIE(InfoExtractor):
    _VALID_URL = r'https?://list\.youku\.com/show/id_(?P<id>[0-9a-z]+)\.html'
    IE_NAME = 'youku:show'

    _TEST = {
    _TESTS = [{
        'url': 'http://list.youku.com/show/id_zc7c670be07ff11e48b3f.html',
        'info_dict': {
            'id': 'zc7c670be07ff11e48b3f',
            'title': '花千骨 未删减版',
            'title': '花千骨 DVD版',
            'description': 'md5:a1ae6f5618571bbeb5c9821f9c81b558',
        },
        'playlist_count': 50,
    }
    }, {
        # Episode number not starting from 1
        'url': 'http://list.youku.com/show/id_zefbfbd70efbfbd780bef.html',
        'info_dict': {
            'id': 'zefbfbd70efbfbd780bef',
            'title': '超级飞侠3',
            'description': 'md5:275715156abebe5ccc2a1992e9d56b98',
        },
        'playlist_count': 24,
    }, {
        # Ongoing playlist. The initial page is the last one
        'url': 'http://list.youku.com/show/id_za7c275ecd7b411e1a19e.html',
        'only_matching': True,
    }]

    _PAGE_SIZE = 40
    def _extract_entries(self, playlist_data_url, show_id, note, query):
        query['callback'] = 'cb'
        playlist_data = self._download_json(
            playlist_data_url, show_id, query=query, note=note,
            transform_source=lambda s: js_to_json(strip_jsonp(s)))['html']
        drama_list = (get_element_by_class('p-drama-grid', playlist_data) or
                      get_element_by_class('p-drama-half-row', playlist_data))
        if drama_list is None:
            raise ExtractorError('No episodes found')
        video_urls = re.findall(r'<a[^>]+href="([^"]+)"', drama_list)
        return playlist_data, [
            self.url_result(self._proto_relative_url(video_url, 'http:'), YoukuIE.ie_key())
            for video_url in video_urls]

    def _real_extract(self, url):
        show_id = self._match_id(url)
@@ -242,30 +265,29 @@ class YoukuShowIE(InfoExtractor):
        page_config = self._parse_json(self._search_regex(
            r'var\s+PageConfig\s*=\s*({.+});', webpage, 'page config'),
            show_id, transform_source=js_to_json)
        for idx in itertools.count(0):
            if idx == 0:
                playlist_data_url = 'http://list.youku.com/show/module'
                query = {'id': page_config['showid'], 'tab': 'point'}
            else:
                playlist_data_url = 'http://list.youku.com/show/point'
                query = {
                    'id': page_config['showid'],
                    'stage': 'reload_%d' % (self._PAGE_SIZE * idx + 1),
                }
            query['callback'] = 'cb'
            playlist_data = self._download_json(
                playlist_data_url, show_id, query=query,
        first_page, initial_entries = self._extract_entries(
            'http://list.youku.com/show/module', show_id,
            note='Downloading initial playlist data page',
            query={
                'id': page_config['showid'],
                'tab': 'showInfo',
            })
        first_page_reload_id = self._html_search_regex(
            r'<div[^>]+id="(reload_\d+)', first_page, 'first page reload id')
        # The first reload_id has the same items as first_page
        reload_ids = re.findall('<li[^>]+data-id="([^"]+)">', first_page)
        for idx, reload_id in enumerate(reload_ids):
            if reload_id == first_page_reload_id:
                entries.extend(initial_entries)
                continue
            _, new_entries = self._extract_entries(
                'http://list.youku.com/show/episode', show_id,
                note='Downloading playlist data page %d' % (idx + 1),
                transform_source=lambda s: js_to_json(strip_jsonp(s)))['html']
            video_urls = re.findall(
                r'<div[^>]+class="p-thumb"[^<]+<a[^>]+href="([^"]+)"',
                playlist_data)
            new_entries = [
                self.url_result(urljoin(url, video_url), YoukuIE.ie_key())
                for video_url in video_urls]
                query={
                    'id': page_config['showid'],
                    'stage': reload_id,
                })
            entries.extend(new_entries)
            if len(new_entries) < self._PAGE_SIZE:
                break

        desc = self._html_search_meta('description', webpage, fatal=False)
        playlist_title = desc.split(',')[0] if desc else None
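The rewrite stops guessing `reload_%d` offsets and instead harvests the `data-id` values advertised on the initial page, which is what makes playlists whose numbering does not start at 1 (and ongoing shows whose initial page is the last one) page correctly. A sketch of the id harvesting on a fabricated HTML snippet, using the two regexes from the diff:

```python
import re

first_page = (
    '<div id="reload_25"></div>'
    '<li data-id="reload_1">...</li>'
    '<li data-id="reload_25">...</li>'
    '<li data-id="reload_49">...</li>'
)
first_page_reload_id = re.search(
    r'<div[^>]+id="(reload_\d+)', first_page).group(1)  # 'reload_25'
for reload_id in re.findall('<li[^>]+data-id="([^"]+)">', first_page):
    if reload_id == first_page_reload_id:
        print('reuse the entries already parsed from the initial page')
        continue
    print('fetch http://list.youku.com/show/episode with stage=%s' % reload_id)
```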
youtube_dl/extractor/youtube.py
@@ -673,6 +673,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            },
        },
        # video_info is None (https://github.com/rg3/youtube-dl/issues/4421)
        # YouTube Red ad is not captured for creator
        {
            'url': '__2ABJjxzNo',
            'info_dict': {
@@ -1649,7 +1650,21 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            video_webpage, 'license', default=None)

        m_music = re.search(
            r'<h4[^>]+class="title"[^>]*>\s*Music\s*</h4>\s*<ul[^>]*>\s*<li>(?P<title>.+?) by (?P<creator>.+?)(?:\(.+?\))?</li',
            r'''(?x)
                <h4[^>]+class="title"[^>]*>\s*Music\s*</h4>\s*
                <ul[^>]*>\s*
                <li>(?P<title>.+?)
                by (?P<creator>.+?)
                (?:
                    \(.+?\)|
                    <a[^>]*
                        (?:
                            \bhref=["\']/red[^>]*>|             # drop possible
                            >\s*Listen ad-free with YouTube Red # YouTube Red ad
                        )
                    .*?
                )?</li
            ''',
            video_webpage)
        if m_music:
            video_alt_title = remove_quotes(unescapeHTML(m_music.group('title')))
youtube_dl/options.py
@@ -20,6 +20,24 @@ from .utils import (
from .version import __version__


def _hide_login_info(opts):
    PRIVATE_OPTS = set(['-p', '--password', '-u', '--username', '--video-password', '--ap-password', '--ap-username'])
    eqre = re.compile('^(?P<key>' + ('|'.join(re.escape(po) for po in PRIVATE_OPTS)) + ')=.+$')

    def _scrub_eq(o):
        m = eqre.match(o)
        if m:
            return m.group('key') + '=PRIVATE'
        else:
            return o

    opts = list(map(_scrub_eq, opts))
    for idx, opt in enumerate(opts):
        if opt in PRIVATE_OPTS and idx + 1 < len(opts):
            opts[idx + 1] = 'PRIVATE'
    return opts


def parseOpts(overrideArguments=None):
    def _readOptions(filename_bytes, default=[]):
        try:
@@ -93,26 +111,6 @@ def parseOpts(overrideArguments=None):
    def _comma_separated_values_options_callback(option, opt_str, value, parser):
        setattr(parser.values, option.dest, value.split(','))

    def _hide_login_info(opts):
        PRIVATE_OPTS = ['-p', '--password', '-u', '--username', '--video-password', '--ap-password', '--ap-username']
        eqre = re.compile('^(?P<key>' + ('|'.join(re.escape(po) for po in PRIVATE_OPTS)) + ')=.+$')

        def _scrub_eq(o):
            m = eqre.match(o)
            if m:
                return m.group('key') + '=PRIVATE'
            else:
                return o

        opts = list(map(_scrub_eq, opts))
        for private_opt in PRIVATE_OPTS:
            try:
                i = opts.index(private_opt)
                opts[i + 1] = 'PRIVATE'
            except ValueError:
                pass
        return opts

    # No need to wrap help messages if we're on a wide console
    columns = compat_get_terminal_size().columns
    max_width = columns if columns else 80
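`_hide_login_info` moves from a closure inside `parseOpts` to module level, and the bounds-checked `enumerate` loop replaces the old `opts.index` lookup, which only scrubbed the first occurrence of each private flag. A quick demo of both scrubbing paths, reusing the function body from the diff verbatim:

```python
import re

PRIVATE_OPTS = set(['-p', '--password', '-u', '--username',
                    '--video-password', '--ap-password', '--ap-username'])
eqre = re.compile(
    '^(?P<key>' + ('|'.join(re.escape(po) for po in PRIVATE_OPTS)) + ')=.+$')


def _hide_login_info(opts):
    def _scrub_eq(o):
        m = eqre.match(o)
        return m.group('key') + '=PRIVATE' if m else o

    opts = list(map(_scrub_eq, opts))
    for idx, opt in enumerate(opts):
        if opt in PRIVATE_OPTS and idx + 1 < len(opts):
            opts[idx + 1] = 'PRIVATE'
    return opts


print(_hide_login_info(['-u', 'me@example.com', '--password=hunter2', '-v']))
# ['-u', 'PRIVATE', '--password=PRIVATE', '-v']
```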
youtube_dl/utils.py
@@ -596,7 +596,7 @@ def unescapeHTML(s):
    assert type(s) == compat_str

    return re.sub(
        r'&([^;]+;)', lambda m: _htmlentity_transform(m.group(1)), s)
        r'&([^&;]+;)', lambda m: _htmlentity_transform(m.group(1)), s)


def get_subprocess_encoding():
@@ -2733,6 +2733,8 @@ def cli_option(params, command_option, param):

def cli_bool_option(params, command_option, param, true_value='true', false_value='false', separator=None):
    param = params.get(param)
    if param is None:
        return []
    assert isinstance(param, bool)
    if separator:
        return [command_option + separator + (true_value if param else false_value)]
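The one-character `unescapeHTML` change (`[^;]+` to `[^&;]+`) keeps a candidate entity from swallowing an earlier bare `&` in malformed markup like `&a&quot;`. A comparison of both patterns with a toy transform that only understands `&quot;` (the real `_htmlentity_transform` handles the full entity tables, but behaves the same way on unknown names):

```python
import re


def transform(entity_with_semicolon):
    # stand-in for _htmlentity_transform: resolve &quot;, leave unknowns alone
    return '"' if entity_with_semicolon == 'quot;' else '&' + entity_with_semicolon


old = re.sub(r'&([^;]+;)', lambda m: transform(m.group(1)), '&a&quot;')
new = re.sub(r'&([^&;]+;)', lambda m: transform(m.group(1)), '&a&quot;')
print(old)  # &a&quot;  (the leading '&' absorbed the real entity)
print(new)  # &a"       (only the genuine entity is rewritten)
```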
youtube_dl/version.py
@@ -1,3 +1,3 @@
from __future__ import unicode_literals

__version__ = '2017.07.09'
__version__ = '2017.08.18'