Integration of cropdetect logic into anime_audio_encoder and tv_audio_encoder. Rewrite of READMEs
@@ -4,9 +4,9 @@ This is a collection of Python scripts for various video and audio processing ta
 
 ## Scripts
 
-- **[anime_audio_encoder.py](anime_audio_encoder.py)**: A script tailored for encoding anime. It handles Variable Frame Rate (VFR) sources and uses `av1an` for AV1 encoding. For more details, see the [Anime Audio Encoder README](README_Anime%20Audio%20Encoder.md).
+- **[anime_audio_encoder.py](anime_audio_encoder.py)**: A script tailored for encoding anime. It handles Variable Frame Rate (VFR) sources and uses `av1an` for AV1 encoding. Now supports `--autocrop` to automatically crop black bars using cropdetect logic, applied to the UTVideo intermediate file. For more details, see the [Anime Audio Encoder README](README_Anime%20Audio%20Encoder.md).
-- **[tv_audio_encoder.py](tv_audio_encoder.py)**: A script designed for encoding TV show episodes. It uses `alabamaEncoder` for the video encoding process. For more details, see the [TV Audio Encoder README](README_TV%20Audio%20Encoder.md).
+- **[tv_audio_encoder.py](tv_audio_encoder.py)**: A script designed for encoding TV show episodes. It uses `alabamaEncoder` for the video encoding process. Now supports `--autocrop` to automatically crop black bars using cropdetect logic, applied to the UTVideo intermediate file. For more details, see the [TV Audio Encoder README](README_TV%20Audio%20Encoder.md).
 - **[MkvOpusEnc.py](MkvOpusEnc.py)**: A cross-platform script for batch-processing audio tracks in MKV files to the Opus format. For more details, see the [MkvOpusEnc README](README_MkvOpusEnc.md).
@@ -43,6 +43,15 @@ The following command-line tools must be installed and available in your system'
 ./anime_audio_encoder.py --no-downmix
 ```
 
+* `--autocrop`: Automatically detect and crop black bars from video using cropdetect. The crop is applied only to the UTVideo intermediate file, ensuring no image data is lost even with variable crops.
+
+```bash
+./anime_audio_encoder.py --autocrop
+```
+
+You can combine it with `--no-downmix`:
+
+```bash
+./anime_audio_encoder.py --autocrop --no-downmix
+```
 
 ## Output
 
 * Processed files are moved to the `completed/` directory.
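As background for the `--autocrop` option above: under the hood the scripts sample the video with ffmpeg's `cropdetect` filter and parse the crop strings it prints to stderr. A minimal sketch of that parsing step (the sample log line below is illustrative, not captured output):

```python
import re

def parse_cropdetect_output(stderr_text):
    """Extract crop filter strings like 'crop=1920:800:0:140' from ffmpeg cropdetect stderr output."""
    return re.findall(r"crop=\d+:\d+:\d+:\d+", stderr_text)
```

Each detected string can then be tallied across samples to find the dominant crop.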
@@ -46,6 +46,20 @@ The following command-line tools must be installed and available in your system'
 ./tv_audio_encoder.py --no-downmix
 ```
 
+* `--autocrop`: Automatically detect and crop black bars from video using cropdetect. The crop is applied only to the UTVideo intermediate file, ensuring no image data is lost even with variable crops.
+
+Example:
+
+```bash
+./tv_audio_encoder.py --autocrop
+```
+
+You can combine it with `--no-downmix`:
+
+```bash
+./tv_audio_encoder.py --autocrop --no-downmix
+```
 
 ## Output
 
 * Processed files are moved to the `completed/` directory.
@@ -55,4 +69,5 @@ The following command-line tools must be installed and available in your system'
 
 ## Notes
 
 * This script is intended for use on **Linux** only.
-* The entire process, especially the AV1 encoding, can be very time-consuming and CPU
+* The entire process, especially the AV1 encoding, can be very time-consuming and CPU-intensive. Be prepared for long processing times, especially on large files or less powerful machines.
+* Consider testing with a single file first to fine-tune your desired settings before batch processing a large library of videos.
@@ -1,32 +1,30 @@
 # Advanced Crop Detection Script
 
-This Python script provides a robust and intelligent way to detect the correct crop values for video files. It goes far beyond a simple `ffmpeg-cropdetect` wrapper by using parallel processing and a series of smart heuristics to provide accurate and reliable recommendations, even for complex videos with mixed aspect ratios.
+This Python script (`cropdetect.py`) provides robust, parallelized, and intelligent crop detection for video files. It is much more than a simple wrapper for ffmpeg's `cropdetect` filter: it uses parallel processing, aspect ratio heuristics, luma verification, and bounding box logic to recommend safe crop values, even for complex videos with mixed aspect ratios.
 
 ## Key Features
 
-- **Parallel Processing**: Analyzes video segments in parallel to significantly speed up the detection process on multi-core systems.
-- **Smart Aspect Ratio Snapping**: Automatically "snaps" detected crop values to known cinematic standards (e.g., 1.85:1, 2.39:1, 16:9, 4:3), correcting for minor detection errors.
-- **Mixed Aspect Ratio Detection**: Intelligently identifies videos that switch aspect ratios (e.g., IMAX scenes in a widescreen movie) and warns the user against applying a single, destructive crop.
-- **Credits & Logo Filtering**: Automatically detects and ignores crop values that only appear in the first or last 5% of the video, preventing opening logos or closing credits from influencing the result.
-- **Luma Verification**: Performs a second analysis pass on frames with unidentified aspect ratios. If a frame is too dark, the detection is discarded as unreliable, preventing false positives from dark scenes.
-- **Sanity Checks**: Provides context-aware warnings, such as when it suggests cropping a 4:3 video into a widescreen format.
-- **"No Crop" Logic**: If a video is overwhelmingly detected as not needing a crop (>95% of samples), it will confidently recommend leaving it as is, ignoring insignificant variations.
-- **User-Friendly Output**: Uses color-coded text to make recommendations and warnings easy to read at a glance.
+- **Parallel Processing:** Analyzes video segments in parallel for speed and reliability.
+- **Aspect Ratio Snapping:** Automatically snaps detected crops to known cinematic standards (16:9, 2.39:1, 1.85:1, 4:3, IMAX, etc.), correcting minor detection errors.
+- **Mixed Aspect Ratio Handling:** Detects and safely handles videos with changing aspect ratios (e.g., IMAX scenes), recommending a bounding box crop that never cuts into image data.
+- **Luma Verification:** Discards unreliable crop detections from very dark scenes using a second analysis pass.
+- **Credits/Logo Filtering:** Ignores crops that only appear in the first/last 5% of the video, preventing opening logos or credits from affecting the result.
+- **No Crop Recommendation:** If the video is overwhelmingly detected as not needing a crop, the script will confidently recommend leaving it as is.
+- **User-Friendly Output:** Color-coded recommendations and warnings for easy review.
+- **Safe for Automation:** The recommended crop is always the outermost croppable frame, so no image data is lost, even with variable crops.
 
 ## Prerequisites
 
-1. **Python 3**: The script is written for Python 3.
-2. **FFmpeg**: Both `ffmpeg` and `ffprobe` must be installed and accessible in your system's `PATH`. The script will check for these on startup.
+- **Python 3**
+- **FFmpeg**: Both `ffmpeg` and `ffprobe` must be installed and in your system's `PATH`.
 
 ## Installation
 
-No complex installation is required. Simply save the script as `cropdetect.py` and ensure it is executable.
+Just save the script as `cropdetect.py` and make it executable if needed.
 
 ## Usage
 
-Run the script from your terminal, passing the path to the video file as an argument.
+Run the script from your terminal, passing the path to the video file as an argument:
 
-### Basic Usage
-
 ```bash
 python cropdetect.py "path/to/your/video.mkv"
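The aspect ratio snapping described in the features list can be sketched in a few lines (a simplified illustration, not the script's exact implementation; the 3% tolerance mirrors the default used in the integrated code shown further below):

```python
# Known cinematic standards the detector snaps to (subset, for illustration).
KNOWN_ASPECT_RATIOS = [
    ("HDTV (16:9)", 16 / 9),
    ("Widescreen (Scope)", 2.39),
    ("Widescreen (Flat)", 1.85),
    ("Fullscreen (4:3)", 4 / 3),
]

def snap_ratio(w, h, tolerance=0.03):
    """Return the name of the closest known aspect ratio, or None if none is within tolerance."""
    detected = w / h
    name, ratio = min(KNOWN_ASPECT_RATIOS, key=lambda ar: abs(detected - ar[1]))
    if abs(detected - ratio) / ratio < tolerance:
        return name
    return None
```

For example, a detected 3840x2080 crop (ratio ~1.846) snaps to "Widescreen (Flat)" instead of being treated as a novel aspect ratio, which is how minor cropdetect jitter gets corrected.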
@@ -34,91 +32,45 @@ python cropdetect.py "path/to/your/video.mkv"
 ```
 
 ### Options
 
-- `-j, --jobs`: Specify the number of parallel processes to use for analysis. By default, it uses half of your available CPU cores.
-
-```bash
-# Use 8 parallel jobs
-python cropdetect.py "path/to/video.mkv" --jobs 8
-```
-
-- `-i, --interval`: Set the time interval (in seconds) between video samples. A smaller interval is more thorough but slower. The default is 30 seconds.
-
-```bash
-# Analyze the video every 15 seconds
-python cropdetect.py "path/to/video.mkv" --interval 15
-```
+- `-n, --num_workers`: Number of parallel worker threads (default: half your CPU cores).
+- `-sct, --significant_crop_threshold`: Minimum percentage of samples in which a crop must appear to be considered significant (default: 5.0).
+- `-mc, --min_crop`: Minimum number of pixels cropped on a side for it to be considered a major crop (default: 10).
+- `--debug`: Enable detailed debug logging.
 
 ## Example Output
 
 ### Confident Crop Recommendation
 
-For a standard widescreen movie, the output will be clear and simple.
+For a standard widescreen movie:
 
 ```
---- Prerequisite Check ---
-All required tools found.
-
-Video properties: 3840x2160, 7588.66s. Analyzing with up to 16 parallel jobs...
-
---- Starting Analysis ---
-Analyzing Segments: 252/252 completed...
-
---- Final Verdict ---
---- Credits/Logo Detection ---
-Ignoring 55 crop value(s) that appear only in the first/last 5% of the video.
-
---- Luma Verification ---
-Verifying scenes: 97/97 completed...
-Ignoring 347 detections that occurred in very dark scenes.
-
-Analysis complete.
-The video consistently uses the 'Widescreen (Flat)' aspect ratio.
 Recommended crop filter: -vf crop=3840:2080:0:40
 ```
 
 ### Mixed Aspect Ratio Warning
 
-For a movie with changing aspect ratios, the script will advise against cropping.
+For a movie with changing aspect ratios:
 
 ```
---- Prerequisite Check ---
-All required tools found.
-
-Video properties: 1920x1080, 3640.90s. Analyzing with up to 16 parallel jobs...
-
---- Starting Analysis ---
-Analyzing Segments: 121/121 completed...
-
---- Final Verdict ---
---- Credits/Logo Detection ---
-Ignoring 15 crop value(s) that appear only in the first/last 5% of the video.
-
---- Luma Verification ---
-Verifying scenes: 121/121 completed...
-Ignoring 737 detections that occurred in very dark scenes.
-
---- WARNING: Potentially Mixed Aspect Ratios Detected! ---
-The dominant aspect ratio is 'Widescreen (Scope)' (crop=1920:808:0:136), found in 96.2% of samples.
-However, other significantly different aspect ratios were also detected, although less frequently.
-
+WARNING: Potentially Mixed Aspect Ratios Detected!
 Recommendation: Manually check the video before applying a single crop.
-You can review the next most common detections below:
-- 'Fullscreen (4:3)' (crop=1440:1080:240:0) was detected 69 time(s) (3.8%).
 ```
 
 ### No Crop Needed
 
-For a video that is already perfectly formatted (e.g., a 4:3 TV show), the script will recommend doing nothing.
+For a video that is already perfectly formatted:
 
 ```
---- Prerequisite Check ---
-All required tools found.
-
-Video properties: 768x576, 1770.78s. Analyzing with up to 16 parallel jobs...
-
---- Starting Analysis ---
-Analyzing Segments: 58/58 completed...
-
---- Final Verdict ---
-Analysis complete.
-The video is overwhelmingly 'Fullscreen (4:3)' and does not require cropping.
-Minor aspect ratio variations were detected but are considered insignificant due to their low frequency.
 Recommendation: No crop needed.
 ```
 
+## Integration with Other Scripts
+
+This crop detection logic is now integrated into `anime_audio_encoder.py` and `tv_audio_encoder.py` via the `--autocrop` option. When enabled, those scripts automatically detect and apply the safest crop to the UTVideo intermediate file, ensuring no image data is lost, even with variable crops.
+
+## Notes
+
+- The script is safe for automation and batch workflows.
+- The recommended crop will never cut into the actual image, only remove black bars.
+- For complex videos, a bounding box crop is calculated to contain all significant scenes.
+- If no crop is needed, none will be applied.
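The bounding box mentioned in the notes is simply the union of all significant crop rectangles; a minimal sketch of that idea (simplified from the logic in the integrated code below):

```python
def bounding_box(crops):
    """Smallest crop rectangle containing every (w, h, x, y) crop in the list."""
    x1 = min(x for _, _, x, _ in crops)
    y1 = min(y for _, _, _, y in crops)
    x2 = max(x + w for w, _, x, _ in crops)
    y2 = max(y + h for _, h, _, y in crops)
    return (x2 - x1, y2 - y1, x1, y1)
```

Uniting a 2.39:1 scope crop with a pillarboxed 4:3 crop yields the full 1920x1080 frame, which is why mixed aspect ratio sources end up effectively uncropped rather than damaged.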
@@ -83,7 +83,7 @@ def convert_audio_track(index, ch, lang, audio_temp_dir, source_file, should_dow
     ])
     return final_opus
 
-def convert_video(source_file_base, source_file_full, is_vfr, target_cfr_fps_for_handbrake):
+def convert_video(source_file_base, source_file_full, is_vfr, target_cfr_fps_for_handbrake, autocrop_filter=None):
     print(" --- Starting Video Processing ---")
     # source_file_base is file_path.stem (e.g., "my.anime.episode.01")
     scene_file = Path(f"{source_file_base}.txt")
@@ -143,7 +143,10 @@ def convert_video(source_file_base, source_file_full, is_vfr, target_cfr_fps_for
     ffmpeg_args = [
         "ffmpeg", "-hide_banner", "-v", "quiet", "-stats", "-y", "-i", str(current_input_for_utvideo),
         "-map", "0:v:0", "-map_metadata", "-1", "-map_chapters", "-1", "-an", "-sn", "-dn",
-    ] + video_codec_args + [str(ut_video_file)]
+    ]
+    if autocrop_filter:
+        ffmpeg_args += ["-vf", autocrop_filter]
+    ffmpeg_args += video_codec_args + [str(ut_video_file)]
     run_cmd(ffmpeg_args)
     ut_video_full_path = os.path.abspath(ut_video_file)
     vpy_script_content = f"""import vapoursynth as vs
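The change above splits the argument list so an optional `-vf` crop filter can be spliced in before the codec arguments. A standalone sketch of the same conditional assembly (helper name and argument subset are hypothetical, for illustration only):

```python
def build_utvideo_args(src, dst, codec_args, autocrop_filter=None):
    """Assemble an ffmpeg argument list, optionally inserting a crop filter (hypothetical helper)."""
    args = ["ffmpeg", "-hide_banner", "-y", "-i", src,
            "-map", "0:v:0", "-an", "-sn", "-dn"]
    if autocrop_filter:
        # e.g. "crop=1920:800:0:140" as returned by the crop detection
        args += ["-vf", autocrop_filter]
    return args + codec_args + [dst]
```

When `autocrop_filter` is None the command is identical to the pre-patch behavior, so `--autocrop` stays fully opt-in.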
@@ -207,111 +210,301 @@ def is_ffmpeg_decodable(file_path):
     except subprocess.CalledProcessError:
         return False
 
-def main(no_downmix=False):
+# --- CROPDETECT LOGIC FROM cropdetect.py ---
+import argparse as _argparse_cropdetect
+import multiprocessing as _multiprocessing_cropdetect
+from collections import Counter as _Counter_cropdetect
+
+COLOR_GREEN = "\033[92m"
+COLOR_RED = "\033[91m"
+COLOR_YELLOW = "\033[93m"
+COLOR_RESET = "\033[0m"
+
+KNOWN_ASPECT_RATIOS = [
+    {"name": "HDTV (16:9)", "ratio": 16/9},
+    {"name": "Widescreen (Scope)", "ratio": 2.39},
+    {"name": "Widescreen (Flat)", "ratio": 1.85},
+    {"name": "IMAX Digital (1.90:1)", "ratio": 1.90},
+    {"name": "Fullscreen (4:3)", "ratio": 4/3},
+    {"name": "IMAX 70mm (1.43:1)", "ratio": 1.43},
+]
+
+def _check_prerequisites_cropdetect():
+    for tool in ['ffmpeg', 'ffprobe']:
+        if not shutil.which(tool):
+            print(f"Error: '{tool}' command not found. Is it installed and in your PATH?")
+            return False
+    return True
+
+def _analyze_segment_cropdetect(task_args):
+    seek_time, input_file, width, height = task_args
+    ffmpeg_args = [
+        'ffmpeg', '-hide_banner',
+        '-ss', str(seek_time),
+        '-i', input_file, '-t', '1', '-vf', 'cropdetect',
+        '-f', 'null', '-'
+    ]
+    result = subprocess.run(ffmpeg_args, capture_output=True, text=True, encoding='utf-8')
+    if result.returncode != 0:
+        return []
+    crop_detections = re.findall(r'crop=(\d+):(\d+):(\d+):(\d+)', result.stderr)
+    significant_crops = []
+    for w_str, h_str, x_str, y_str in crop_detections:
+        w, h, x, y = map(int, [w_str, h_str, x_str, y_str])
+        significant_crops.append((f"crop={w}:{h}:{x}:{y}", seek_time))
+    return significant_crops
+
+def _snap_to_known_ar_cropdetect(w, h, x, y, video_w, video_h, tolerance=0.03):
+    if h == 0:
+        return f"crop={w}:{h}:{x}:{y}", None
+    detected_ratio = w / h
+    best_match = None
+    smallest_diff = float('inf')
+    for ar in KNOWN_ASPECT_RATIOS:
+        diff = abs(detected_ratio - ar['ratio'])
+        if diff < smallest_diff:
+            smallest_diff = diff
+            best_match = ar
+    if not best_match or (smallest_diff / best_match['ratio']) >= tolerance:
+        return f"crop={w}:{h}:{x}:{y}", None
+    if abs(w - video_w) < 16:
+        new_h = round(video_w / best_match['ratio'])
+        if new_h % 8 != 0:
+            new_h = new_h + (8 - (new_h % 8))
+        new_y = round((video_h - new_h) / 2)
+        if new_y % 2 != 0:
+            new_y -= 1
+        return f"crop={video_w}:{new_h}:0:{new_y}", best_match['name']
+    if abs(h - video_h) < 16:
+        new_w = round(video_h * best_match['ratio'])
+        if new_w % 8 != 0:
+            new_w = new_w + (8 - (new_w % 8))
+        new_x = round((video_w - new_w) / 2)
+        if new_x % 2 != 0:
+            new_x -= 1
+        return f"crop={new_w}:{video_h}:{new_x}:0", best_match['name']
+    return f"crop={w}:{h}:{x}:{y}", None
+
+def _cluster_crop_values_cropdetect(crop_counts, tolerance=8):
+    clusters = []
+    temp_counts = crop_counts.copy()
+    while temp_counts:
+        center_str, _ = temp_counts.most_common(1)[0]
+        try:
+            _, values = center_str.split('=')
+            cw, ch, cx, cy = map(int, values.split(':'))
+        except (ValueError, IndexError):
+            del temp_counts[center_str]
+            continue
+        cluster_total_count = 0
+        crops_to_remove = []
+        for crop_str, count in temp_counts.items():
+            try:
+                _, values = crop_str.split('=')
+                w, h, x, y = map(int, values.split(':'))
+                if abs(x - cx) <= tolerance and abs(y - cy) <= tolerance:
+                    cluster_total_count += count
+                    crops_to_remove.append(crop_str)
+            except (ValueError, IndexError):
+                continue
+        if cluster_total_count > 0:
+            clusters.append({'center': center_str, 'count': cluster_total_count})
+        for crop_str in crops_to_remove:
+            del temp_counts[crop_str]
+    clusters.sort(key=lambda c: c['count'], reverse=True)
+    return clusters
+
+def _parse_crop_string_cropdetect(crop_str):
+    try:
+        _, values = crop_str.split('=')
+        w, h, x, y = map(int, values.split(':'))
+        return {'w': w, 'h': h, 'x': x, 'y': y}
+    except (ValueError, IndexError):
+        return None
+
+def _calculate_bounding_box_cropdetect(crop_keys):
+    min_x = min_w = min_y = min_h = float('inf')
+    max_x = max_w = max_y = max_h = float('-inf')
+    for key in crop_keys:
+        parsed = _parse_crop_string_cropdetect(key)
+        if not parsed:
+            continue
+        w, h, x, y = parsed['w'], parsed['h'], parsed['x'], parsed['y']
+        min_x = min(min_x, x)
+        min_y = min(min_y, y)
+        max_x = max(max_x, x + w)
+        max_y = max(max_y, y + h)
+        min_w = min(min_w, w)
+        min_h = min(min_h, h)
+        max_w = max(max_w, w)
+        max_h = max(max_h, h)
+    if (max_x - min_x) <= 2 and (max_y - min_y) <= 2:
+        return None
+    bounding_crop = f"crop={max_x - min_x}:{max_y - min_y}:{min_x}:{min_y}"
+    return bounding_crop
+
+def _analyze_video_cropdetect(input_file, duration, width, height, num_workers, significant_crop_threshold, min_crop, debug=False):
+    num_tasks = num_workers * 4
+    segment_duration = max(1, duration // num_tasks)
+    tasks = [(i * segment_duration, input_file, width, height) for i in range(num_tasks)]
+    crop_results = []
+    with _multiprocessing_cropdetect.Pool(processes=num_workers) as pool:
+        results_iterator = pool.imap_unordered(_analyze_segment_cropdetect, tasks)
+        for result in results_iterator:
+            crop_results.append(result)
+    all_crops_with_ts = [crop for sublist in crop_results for crop in sublist]
+    all_crop_strings = [item[0] for item in all_crops_with_ts]
+    if not all_crop_strings:
+        return None
+    crop_counts = _Counter_cropdetect(all_crop_strings)
+    clusters = _cluster_crop_values_cropdetect(crop_counts)
+    total_detections = sum(c['count'] for c in clusters)
+    significant_clusters = []
+    for cluster in clusters:
+        percentage = (cluster['count'] / total_detections) * 100
+        if percentage >= significant_crop_threshold:
+            significant_clusters.append(cluster)
+    for cluster in significant_clusters:
+        parsed_crop = _parse_crop_string_cropdetect(cluster['center'])
+        if parsed_crop:
+            _, ar_label = _snap_to_known_ar_cropdetect(
+                parsed_crop['w'], parsed_crop['h'], parsed_crop['x'], parsed_crop['y'], width, height
+            )
+            cluster['ar_label'] = ar_label
+        else:
+            cluster['ar_label'] = None
+    if not significant_clusters:
+        return None
+    elif len(significant_clusters) == 1:
+        dominant_cluster = significant_clusters[0]
+        parsed_crop = _parse_crop_string_cropdetect(dominant_cluster['center'])
+        snapped_crop, ar_label = _snap_to_known_ar_cropdetect(
+            parsed_crop['w'], parsed_crop['h'], parsed_crop['x'], parsed_crop['y'], width, height
+        )
+        parsed_snapped = _parse_crop_string_cropdetect(snapped_crop)
+        if parsed_snapped and parsed_snapped['w'] == width and parsed_snapped['h'] == height:
+            return None
+        else:
+            return snapped_crop
+    else:
+        crop_keys = [c['center'] for c in significant_clusters]
+        bounding_box_crop = _calculate_bounding_box_cropdetect(crop_keys)
+        if bounding_box_crop:
+            parsed_bb = _parse_crop_string_cropdetect(bounding_box_crop)
+            snapped_crop, ar_label = _snap_to_known_ar_cropdetect(
+                parsed_bb['w'], parsed_bb['h'], parsed_bb['x'], parsed_bb['y'], width, height
+            )
+            parsed_snapped = _parse_crop_string_cropdetect(snapped_crop)
+            if parsed_snapped and parsed_snapped['w'] == width and parsed_snapped['h'] == height:
+                return None
+            else:
+                return snapped_crop
+        else:
+            return None
+
+def detect_autocrop_filter(input_file, significant_crop_threshold=5.0, min_crop=10, debug=False):
+    if not _check_prerequisites_cropdetect():
+        return None
+    try:
+        probe_duration_args = [
+            'ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1',
+            input_file
+        ]
+        duration_str = subprocess.check_output(probe_duration_args, stderr=subprocess.STDOUT, text=True)
+        duration = int(float(duration_str))
+        probe_res_args = [
+            'ffprobe', '-v', 'error',
+            '-select_streams', 'v',
+            '-show_entries', 'stream=width,height,disposition',
+            '-of', 'json',
+            input_file
+        ]
+        probe_output = subprocess.check_output(probe_res_args, stderr=subprocess.STDOUT, text=True)
+        streams_data = json.loads(probe_output)
+        video_stream = None
+        for stream in streams_data.get('streams', []):
+            if stream.get('disposition', {}).get('attached_pic', 0) == 0:
+                video_stream = stream
+                break
+        if not video_stream or 'width' not in video_stream or 'height' not in video_stream:
+            return None
+        width = int(video_stream['width'])
+        height = int(video_stream['height'])
+    except Exception:
+        return None
+    return _analyze_video_cropdetect(input_file, duration, width, height, max(1, os.cpu_count() // 2), significant_crop_threshold, min_crop, debug)
+
+def main(no_downmix=False, autocrop=False):
     check_tools()
 
     current_dir = Path(".")
 
-    # Check if there are any MKV files to process before creating directories
     files_to_process = sorted(
         f for f in current_dir.glob("*.mkv")
         if not (f.name.endswith(".ut.mkv") or f.name.startswith("temp-") or f.name.startswith("output-") or f.name.endswith(".cfr_temp.mkv"))
     )
 
     if not files_to_process:
         print("No MKV files found to process. Exiting.")
-        return # Exit without creating directories
+        return
 
-    # Only create directories when we actually have files to process
     DIR_COMPLETED.mkdir(exist_ok=True, parents=True)
     DIR_ORIGINAL.mkdir(exist_ok=True, parents=True)
-    DIR_CONV_LOGS.mkdir(exist_ok=True, parents=True) # Create conv_logs directory
+    DIR_CONV_LOGS.mkdir(exist_ok=True, parents=True)
 
     while True:
         files_to_process = sorted(
             f for f in current_dir.glob("*.mkv")
             if not (f.name.endswith(".ut.mkv") or f.name.startswith("temp-") or f.name.startswith("output-") or f.name.endswith(".cfr_temp.mkv"))
         )
 
         if not files_to_process:
             print("No more .mkv files found to process in the current directory. The script will now exit.")
             break
 
-        # Process the first file in the list. The list is requeried in the next iteration.
         file_path = files_to_process[0]
 
-        # --- Add ffmpeg decodability check here ---
         if not is_ffmpeg_decodable(file_path):
             print(f"ERROR: ffmpeg cannot decode '{file_path.name}'. Skipping this file.", file=sys.stderr)
             shutil.move(str(file_path), DIR_ORIGINAL / file_path.name)
             continue
 
         print("-" * shutil.get_terminal_size(fallback=(80, 24)).columns)
-        # This print remains on the console, indicating which file is starting.
-        # The detailed "Starting full processing for..." will be in the log.
-        log_file_name = f"{file_path.stem}.log" # Use stem to avoid .mkv.log
+        log_file_name = f"{file_path.stem}.log"
         log_file_path = DIR_CONV_LOGS / log_file_name
 
         original_stdout_console = sys.stdout
         original_stderr_console = sys.stderr
 
-        # Announce to console (original stdout)
         print(f"Processing: {file_path.name}", file=original_stdout_console)
         print(f"Logging output to: {log_file_path}", file=original_stdout_console)
 
         log_file_handle = None
         processing_error_occurred = False
-        date_for_runtime_calc = datetime.now() # For runtime calculation
+        date_for_runtime_calc = datetime.now()
 
-        try: # Outer try for log redirection and file handling
+        try:
             log_file_handle = open(log_file_path, 'w', encoding='utf-8')
             sys.stdout = log_file_handle
             sys.stderr = log_file_handle
 
-            # --- Start of log-specific messages ---
             print(f"STARTING LOG FOR: {file_path.name}")
             print(f"Processing started at: {date_for_runtime_calc}")
             print(f"Full input file path: {file_path.resolve()}")
             print("-" * shutil.get_terminal_size(fallback=(80, 24)).columns)
 
-            input_file_abs = file_path.resolve() # Used by original logic
-            intermediate_output_file = current_dir / f"output-{file_path.name}" # Used by original logic
-            audio_temp_dir = None # Initialize before inner try
-            handbrake_intermediate_for_cleanup = None # Initialize before inner try
+            input_file_abs = file_path.resolve()
+            intermediate_output_file = current_dir / f"output-{file_path.name}"
+            audio_temp_dir = None
+            handbrake_intermediate_for_cleanup = None
 
-            # This is the original try...except...finally block for processing a single file.
-            # All its print statements will now go to the log file.
             try:
                 audio_temp_dir = tempfile.mkdtemp(prefix="anime_audio_")
                 print(f"Audio temporary directory created at: {audio_temp_dir}")
                 print(f"Analyzing file: {input_file_abs}")
 
                 ffprobe_info_json = run_cmd([
                     "ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", "-show_format", str(input_file_abs)
                 ], capture_output=True)
                 ffprobe_info = json.loads(ffprobe_info_json)
 
                 mkvmerge_info_json = run_cmd([
                     "mkvmerge", "-J", str(input_file_abs)
                 ], capture_output=True)
                 mkv_info = json.loads(mkvmerge_info_json)
 
                 mediainfo_json = run_cmd([
                     "mediainfo", "--Output=JSON", "-f", str(input_file_abs)
                 ], capture_output=True)
                 media_info = json.loads(mediainfo_json)
|
||||||
|
|
||||||
is_vfr = False
|
is_vfr = False
|
||||||
target_cfr_fps_for_handbrake = None
|
target_cfr_fps_for_handbrake = None
|
||||||
video_track_info = None
|
video_track_info = None
|
||||||
|
|
||||||
if media_info.get("media") and media_info["media"].get("track"):
|
if media_info.get("media") and media_info["media"].get("track"):
|
||||||
for track in media_info["media"]["track"]:
|
for track in media_info["media"]["track"]:
|
||||||
if track.get("@type") == "Video":
|
if track.get("@type") == "Video":
|
||||||
video_track_info = track
|
video_track_info = track
|
||||||
break
|
break
|
||||||
|
|
||||||
if video_track_info:
|
if video_track_info:
|
||||||
frame_rate_mode = video_track_info.get("FrameRate_Mode")
|
frame_rate_mode = video_track_info.get("FrameRate_Mode")
|
||||||
if frame_rate_mode and frame_rate_mode.upper() in ["VFR", "VARIABLE"]:
|
if frame_rate_mode and frame_rate_mode.upper() in ["VFR", "VARIABLE"]:
|
||||||
@@ -322,20 +515,16 @@ def main(no_downmix=False):
                             match = re.search(r'\((\d+/\d+)\)', original_fps_str)
                             if match:
                                 target_cfr_fps_for_handbrake = match.group(1)
-                            else: # Fallback to decimal part if fraction not in parentheses
+                            else:
                                 target_cfr_fps_for_handbrake = video_track_info.get("FrameRate_Original")
-                        if not target_cfr_fps_for_handbrake: # Fallback if Original_String didn't yield
+                        if not target_cfr_fps_for_handbrake:
                             target_cfr_fps_for_handbrake = video_track_info.get("FrameRate_Original")
-                        if not target_cfr_fps_for_handbrake: # Further fallback to current FrameRate
+                        if not target_cfr_fps_for_handbrake:
                             target_cfr_fps_for_handbrake = video_track_info.get("FrameRate")
                             if target_cfr_fps_for_handbrake:
                                 print(f" - Using MediaInfo FrameRate ({target_cfr_fps_for_handbrake}) as fallback for HandBrake target FPS.")

                         if target_cfr_fps_for_handbrake:
                             print(f" - Target CFR for HandBrake: {target_cfr_fps_for_handbrake}")
-                            # Convert fractional FPS to decimal for HandBrakeCLI if needed
                             if isinstance(target_cfr_fps_for_handbrake, str) and "/" in target_cfr_fps_for_handbrake:
                                 try:
                                     num, den = map(float, target_cfr_fps_for_handbrake.split('/'))
@@ -343,15 +532,22 @@ def main(no_downmix=False):
                                     print(f" - Converted fractional FPS to decimal for HandBrake: {target_cfr_fps_for_handbrake}")
                                 except ValueError:
                                     print(f" - Warning: Could not parse fractional FPS '{target_cfr_fps_for_handbrake}'. HandBrakeCLI might fail.")
-                                    is_vfr = False # Revert if conversion fails
+                                    is_vfr = False
                         else:
                             print(" - Warning: VFR detected, but could not determine target CFR from MediaInfo. Will attempt standard UTVideo conversion without HandBrake.")
-                            is_vfr = False # Revert to non-HandBrake path
+                            is_vfr = False
                 else:
                     print(f" - Video appears to be CFR or FrameRate_Mode not specified as VFR/Variable by MediaInfo.")

+                autocrop_filter = None
+                if autocrop:
+                    print("--- Running autocrop detection ---")
+                    autocrop_filter = detect_autocrop_filter(str(input_file_abs))
+                    if autocrop_filter:
+                        print(f" - Autocrop filter detected: {autocrop_filter}")
+                    else:
+                        print(" - No crop needed or detected.")
+
                 encoded_video_file, handbrake_intermediate_for_cleanup = convert_video(
-                    file_path.stem, str(input_file_abs), is_vfr, target_cfr_fps_for_handbrake
+                    file_path.stem, str(input_file_abs), is_vfr, target_cfr_fps_for_handbrake, autocrop_filter=autocrop_filter
                 )

                 print("--- Starting Audio Processing ---")
@@ -521,7 +717,8 @@ def main(no_downmix=False):

 if __name__ == "__main__":
     import argparse
-    parser = argparse.ArgumentParser(description="Batch-process MKV files with resumable video encoding, audio downmixing, and per-file logging.")
+    parser = argparse.ArgumentParser(description="Batch-process MKV files with resumable video encoding, audio downmixing, per-file logging, and optional autocrop.")
     parser.add_argument("--no-downmix", action="store_true", help="Preserve original audio channel layout.")
+    parser.add_argument("--autocrop", action="store_true", help="Automatically detect and crop black bars from video using cropdetect.")
     args = parser.parse_args()
-    main(no_downmix=args.no_downmix)
+    main(no_downmix=args.no_downmix, autocrop=args.autocrop)
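The `--autocrop` path added in this commit ultimately shells out to ffmpeg's `cropdetect` filter and scrapes crop suggestions out of its stderr with a regex, then tallies them. A standalone sketch of that parsing step (the log excerpt below is fabricated, not real ffmpeg output):

```python
import re
from collections import Counter

# Fabricated excerpt of the stderr that `ffmpeg -vf cropdetect` emits.
sample_stderr = """
[Parsed_cropdetect_0 @ 0x1] ... t:1.23 crop=1920:800:0:140
[Parsed_cropdetect_0 @ 0x1] ... t:1.26 crop=1920:800:0:140
[Parsed_cropdetect_0 @ 0x1] ... t:1.29 crop=1920:808:0:136
"""

# Same regex the scripts use to pull every crop=w:h:x:y suggestion from the log.
crops = re.findall(r'crop=(\d+):(\d+):(\d+):(\d+)', sample_stderr)
crop_strings = [f"crop={w}:{h}:{x}:{y}" for w, h, x, y in crops]

# Counting occurrences across all sampled segments picks the dominant crop.
dominant, count = Counter(crop_strings).most_common(1)[0]
print(dominant, count)  # crop=1920:800:0:140 2
```

The real code runs this per one-second segment in a process pool and then clusters near-identical crop values before deciding.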
tv_audio_encoder.py
@@ -105,7 +105,7 @@ def convert_audio_track(index, ch, lang, audio_temp_dir, source_file, should_dow
     ])
     return final_opus

-def convert_video(source_file_base, source_file_full):
+def convert_video(source_file_base, source_file_full, autocrop_filter=None):
     print(" --- Starting Video Processing ---")
     # source_file_base is the full stem from the original file,
     # e.g., "cheers.s01e04.der.lueckenbuesser.german.dl.fs.1080p.web.h264-cnhd"
@@ -129,7 +129,10 @@ def convert_video(source_file_base, source_file_full):
     ffmpeg_args = [
         "ffmpeg", "-hide_banner", "-v", "quiet", "-stats", "-y", "-i", source_file_full,
         "-map", "0:v:0", "-map_metadata", "-1", "-map_chapters", "-1", "-an", "-sn", "-dn",
-    ] + video_codec_args + [str(ut_video_file)]
+    ]
+    if autocrop_filter:
+        ffmpeg_args += ["-vf", autocrop_filter]
+    ffmpeg_args += video_codec_args + [str(ut_video_file)]
     run_cmd(ffmpeg_args)

     print(" - Starting video encode with AlabamaEncoder (this will take a long time)...")
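The hunk above splits the argument list so an optional `-vf` crop filter can be spliced in between the stream-mapping options and the codec options. A minimal sketch of that pattern (the function name and the UTVideo codec arguments here are illustrative, not the script's exact code):

```python
def build_ffmpeg_args(source_file, out_file, video_codec_args, autocrop_filter=None):
    # Mirrors the diff: global/input/mapping options first, then the optional
    # video filter, then the codec arguments, with the output path last.
    args = [
        "ffmpeg", "-hide_banner", "-y", "-i", source_file,
        "-map", "0:v:0", "-an", "-sn", "-dn",
    ]
    if autocrop_filter:
        args += ["-vf", autocrop_filter]
    args += video_codec_args + [out_file]
    return args

args = build_ffmpeg_args("in.mkv", "out.ut.mkv", ["-c:v", "utvideo"], "crop=1920:800:0:140")
print(args.index("-vf") < args.index("-c:v"))  # True: the filter precedes the codec args
```

Applying the crop at the UTVideo intermediate stage means the downstream encoder never sees the black bars at all.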
@@ -159,58 +162,263 @@ def convert_video(source_file_base, source_file_full):
     print(" --- Finished Video Processing ---")
     return ut_video_file, encoded_video_file

-def main(no_downmix=False):
+# --- CROPDETECT LOGIC FROM cropdetect.py ---
+import multiprocessing as _multiprocessing_cropdetect
+from collections import Counter as _Counter_cropdetect
+
+KNOWN_ASPECT_RATIOS = [
+    {"name": "HDTV (16:9)", "ratio": 16/9},
+    {"name": "Widescreen (Scope)", "ratio": 2.39},
+    {"name": "Widescreen (Flat)", "ratio": 1.85},
+    {"name": "IMAX Digital (1.90:1)", "ratio": 1.90},
+    {"name": "Fullscreen (4:3)", "ratio": 4/3},
+    {"name": "IMAX 70mm (1.43:1)", "ratio": 1.43},
+]
+
+def _check_prerequisites_cropdetect():
+    for tool in ['ffmpeg', 'ffprobe']:
+        if not shutil.which(tool):
+            print(f"Error: '{tool}' command not found. Is it installed and in your PATH?")
+            return False
+    return True
+
+def _analyze_segment_cropdetect(task_args):
+    seek_time, input_file, width, height = task_args
+    ffmpeg_args = [
+        'ffmpeg', '-hide_banner',
+        '-ss', str(seek_time),
+        '-i', input_file, '-t', '1', '-vf', 'cropdetect',
+        '-f', 'null', '-'
+    ]
+    result = subprocess.run(ffmpeg_args, capture_output=True, text=True, encoding='utf-8')
+    if result.returncode != 0:
+        return []
+    import re
+    crop_detections = re.findall(r'crop=(\d+):(\d+):(\d+):(\d+)', result.stderr)
+    significant_crops = []
+    for w_str, h_str, x_str, y_str in crop_detections:
+        w, h, x, y = map(int, [w_str, h_str, x_str, y_str])
+        significant_crops.append((f"crop={w}:{h}:{x}:{y}", seek_time))
+    return significant_crops
+
+def _snap_to_known_ar_cropdetect(w, h, x, y, video_w, video_h, tolerance=0.03):
+    if h == 0: return f"crop={w}:{h}:{x}:{y}", None
+    detected_ratio = w / h
+    best_match = None
+    smallest_diff = float('inf')
+    for ar in KNOWN_ASPECT_RATIOS:
+        diff = abs(detected_ratio - ar['ratio'])
+        if diff < smallest_diff:
+            smallest_diff = diff
+            best_match = ar
+    if not best_match or (smallest_diff / best_match['ratio']) >= tolerance:
+        return f"crop={w}:{h}:{x}:{y}", None
+    if abs(w - video_w) < 16:
+        new_h = round(video_w / best_match['ratio'])
+        if new_h % 8 != 0:
+            new_h = new_h + (8 - (new_h % 8))
+        new_y = round((video_h - new_h) / 2)
+        if new_y % 2 != 0:
+            new_y -= 1
+        return f"crop={video_w}:{new_h}:0:{new_y}", best_match['name']
+    if abs(h - video_h) < 16:
+        new_w = round(video_h * best_match['ratio'])
+        if new_w % 8 != 0:
+            new_w = new_w + (8 - (new_w % 8))
+        new_x = round((video_w - new_w) / 2)
+        if new_x % 2 != 0:
+            new_x -= 1
+        return f"crop={new_w}:{video_h}:{new_x}:0", best_match['name']
+    return f"crop={w}:{h}:{x}:{y}", None
+
+def _cluster_crop_values_cropdetect(crop_counts, tolerance=8):
+    clusters = []
+    temp_counts = crop_counts.copy()
+    while temp_counts:
+        center_str, _ = temp_counts.most_common(1)[0]
+        try:
+            _, values = center_str.split('=')
+            cw, ch, cx, cy = map(int, values.split(':'))
+        except (ValueError, IndexError):
+            del temp_counts[center_str]
+            continue
+        cluster_total_count = 0
+        crops_to_remove = []
+        for crop_str, count in temp_counts.items():
+            try:
+                _, values = crop_str.split('=')
+                w, h, x, y = map(int, values.split(':'))
+                if abs(x - cx) <= tolerance and abs(y - cy) <= tolerance:
+                    cluster_total_count += count
+                    crops_to_remove.append(crop_str)
+            except (ValueError, IndexError):
+                continue
+        if cluster_total_count > 0:
+            clusters.append({'center': center_str, 'count': cluster_total_count})
+        for crop_str in crops_to_remove:
+            del temp_counts[crop_str]
+    clusters.sort(key=lambda c: c['count'], reverse=True)
+    return clusters
+
+def _parse_crop_string_cropdetect(crop_str):
+    try:
+        _, values = crop_str.split('=')
+        w, h, x, y = map(int, values.split(':'))
+        return {'w': w, 'h': h, 'x': x, 'y': y}
+    except (ValueError, IndexError):
+        return None
+
+def _calculate_bounding_box_cropdetect(crop_keys):
+    min_x = min_w = min_y = min_h = float('inf')
+    max_x = max_w = max_y = max_h = float('-inf')
+    for key in crop_keys:
+        parsed = _parse_crop_string_cropdetect(key)
+        if not parsed:
+            continue
+        w, h, x, y = parsed['w'], parsed['h'], parsed['x'], parsed['y']
+        min_x = min(min_x, x)
+        min_y = min(min_y, y)
+        max_x = max(max_x, x + w)
+        max_y = max(max_y, y + h)
+        min_w = min(min_w, w)
+        min_h = min(min_h, h)
+        max_w = max(max_w, w)
+        max_h = max(max_h, h)
+    if (max_x - min_x) <= 2 and (max_y - min_y) <= 2:
+        return None
+    bounding_crop = f"crop={max_x - min_x}:{max_y - min_y}:{min_x}:{min_y}"
+    return bounding_crop
+
+def _analyze_video_cropdetect(input_file, duration, width, height, num_workers, significant_crop_threshold, min_crop, debug=False):
+    num_tasks = num_workers * 4
+    segment_duration = max(1, duration // num_tasks)
+    tasks = [(i * segment_duration, input_file, width, height) for i in range(num_tasks)]
+    crop_results = []
+    with _multiprocessing_cropdetect.Pool(processes=num_workers) as pool:
+        results_iterator = pool.imap_unordered(_analyze_segment_cropdetect, tasks)
+        for result in results_iterator:
+            crop_results.append(result)
+    all_crops_with_ts = [crop for sublist in crop_results for crop in sublist]
+    all_crop_strings = [item[0] for item in all_crops_with_ts]
+    if not all_crop_strings:
+        return None
+    crop_counts = _Counter_cropdetect(all_crop_strings)
+    clusters = _cluster_crop_values_cropdetect(crop_counts)
+    total_detections = sum(c['count'] for c in clusters)
+    significant_clusters = []
+    for cluster in clusters:
+        percentage = (cluster['count'] / total_detections) * 100
+        if percentage >= significant_crop_threshold:
+            significant_clusters.append(cluster)
+    for cluster in significant_clusters:
+        parsed_crop = _parse_crop_string_cropdetect(cluster['center'])
+        if parsed_crop:
+            _, ar_label = _snap_to_known_ar_cropdetect(
+                parsed_crop['w'], parsed_crop['h'], parsed_crop['x'], parsed_crop['y'], width, height
+            )
+            cluster['ar_label'] = ar_label
+        else:
+            cluster['ar_label'] = None
+    if not significant_clusters:
+        return None
+    elif len(significant_clusters) == 1:
+        dominant_cluster = significant_clusters[0]
+        parsed_crop = _parse_crop_string_cropdetect(dominant_cluster['center'])
+        snapped_crop, ar_label = _snap_to_known_ar_cropdetect(
+            parsed_crop['w'], parsed_crop['h'], parsed_crop['x'], parsed_crop['y'], width, height
+        )
+        parsed_snapped = _parse_crop_string_cropdetect(snapped_crop)
+        if parsed_snapped and parsed_snapped['w'] == width and parsed_snapped['h'] == height:
+            return None
+        else:
+            return snapped_crop
+    else:
+        crop_keys = [c['center'] for c in significant_clusters]
+        bounding_box_crop = _calculate_bounding_box_cropdetect(crop_keys)
+        if bounding_box_crop:
+            parsed_bb = _parse_crop_string_cropdetect(bounding_box_crop)
+            snapped_crop, ar_label = _snap_to_known_ar_cropdetect(
+                parsed_bb['w'], parsed_bb['h'], parsed_bb['x'], parsed_bb['y'], width, height
+            )
+            parsed_snapped = _parse_crop_string_cropdetect(snapped_crop)
+            if parsed_snapped and parsed_snapped['w'] == width and parsed_snapped['h'] == height:
+                return None
+            else:
+                return snapped_crop
+        else:
+            return None
+
+def detect_autocrop_filter(input_file, significant_crop_threshold=5.0, min_crop=10, debug=False):
+    if not _check_prerequisites_cropdetect():
+        return None
+    try:
+        probe_duration_args = [
+            'ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1',
+            input_file
+        ]
+        duration_str = subprocess.check_output(probe_duration_args, stderr=subprocess.STDOUT, text=True)
+        duration = int(float(duration_str))
+        probe_res_args = [
+            'ffprobe', '-v', 'error',
+            '-select_streams', 'v',
+            '-show_entries', 'stream=width,height,disposition',
+            '-of', 'json',
+            input_file
+        ]
+        probe_output = subprocess.check_output(probe_res_args, stderr=subprocess.STDOUT, text=True)
+        streams_data = json.loads(probe_output)
+        video_stream = None
+        for stream in streams_data.get('streams', []):
+            if stream.get('disposition', {}).get('attached_pic', 0) == 0:
+                video_stream = stream
+                break
+        if not video_stream or 'width' not in video_stream or 'height' not in video_stream:
+            return None
+        width = int(video_stream['width'])
+        height = int(video_stream['height'])
+    except Exception:
+        return None
+    return _analyze_video_cropdetect(input_file, duration, width, height, max(1, os.cpu_count() // 2), significant_crop_threshold, min_crop, debug)
+
+def main(no_downmix=False, autocrop=False):
     check_tools()

     current_dir = Path(".")

-    # Check if there are any MKV files to process before creating directories
     files_to_process = sorted(
         f for f in current_dir.glob("*.mkv")
         if not (f.name.endswith(".ut.mkv") or f.name.startswith("temp-") or f.name.startswith("output-"))
     )

     if not files_to_process:
         print("No MKV files found to process. Exiting.")
-        return # Exit without creating directories
+        return

-    # Only create directories when we actually have files to process
     DIR_COMPLETED.mkdir(exist_ok=True, parents=True)
     DIR_ORIGINAL.mkdir(exist_ok=True, parents=True)
     DIR_LOGS.mkdir(exist_ok=True, parents=True)

     while True:
         files_to_process = sorted(
             f for f in current_dir.glob("*.mkv")
             if not (f.name.endswith(".ut.mkv") or f.name.startswith("temp-") or f.name.startswith("output-"))
         )

         if not files_to_process:
             print("No more .mkv files found to process in the current directory. The script will now exit.")
             break

         file_path = files_to_process[0]

-        # Setup logging
         log_file_path = DIR_LOGS / f"{file_path.name}.log"
         log_file = open(log_file_path, 'w', encoding='utf-8')
         original_stdout = sys.stdout
         original_stderr = sys.stderr
         sys.stdout = Tee(original_stdout, log_file)
         sys.stderr = Tee(original_stderr, log_file)

         try:
             print("-" * shutil.get_terminal_size(fallback=(80, 24)).columns)
             print(f"Starting full processing for: {file_path.name}")
             date = datetime.now()
             input_file_abs = file_path.resolve()
             intermediate_output_file = current_dir / f"output-{file_path.name}"
-            audio_temp_dir = None # Initialize to None
+            audio_temp_dir = None
             created_ut_video_path = None
             created_encoded_video_path = None

             try:
-                audio_temp_dir = tempfile.mkdtemp(prefix="tv_audio_") # UUID is not strictly needed for uniqueness
+                audio_temp_dir = tempfile.mkdtemp(prefix="tv_audio_")
                 print(f"Audio temporary directory created at: {audio_temp_dir}")
                 print(f"Analyzing file: {input_file_abs}")

@@ -229,7 +437,16 @@ def main(no_downmix=False):
                 ], capture_output=True)
                 media_info = json.loads(mediainfo_json)

-                created_ut_video_path, created_encoded_video_path = convert_video(file_path.stem, str(input_file_abs))
+                autocrop_filter = None
+                if autocrop:
+                    print("--- Running autocrop detection ---")
+                    autocrop_filter = detect_autocrop_filter(str(input_file_abs))
+                    if autocrop_filter:
+                        print(f" - Autocrop filter detected: {autocrop_filter}")
+                    else:
+                        print(" - No crop needed or detected.")
+
+                created_ut_video_path, created_encoded_video_path = convert_video(file_path.stem, str(input_file_abs), autocrop_filter=autocrop_filter)

                 print("--- Starting Audio Processing ---")
                 processed_audio_files = []
@@ -352,7 +569,8 @@ def main(no_downmix=False):

 if __name__ == "__main__":
     import argparse
-    parser = argparse.ArgumentParser(description="Batch-process MKV files with resumable video encoding and audio downmixing.")
+    parser = argparse.ArgumentParser(description="Batch-process MKV files with resumable video encoding and audio downmixing, with optional autocrop.")
     parser.add_argument("--no-downmix", action="store_true", help="Preserve original audio channel layout.")
+    parser.add_argument("--autocrop", action="store_true", help="Automatically detect and crop black bars from video using cropdetect.")
     args = parser.parse_args()
-    main(no_downmix=args.no_downmix)
+    main(no_downmix=args.no_downmix, autocrop=args.autocrop)
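The `_snap_to_known_ar_cropdetect` helper above is mostly integer hygiene: when a detected crop lies within tolerance of a known aspect ratio, the crop height is recomputed from that ratio, padded up to a multiple of 8, and re-centred on an even vertical offset. A simplified standalone sketch of that arithmetic (full-width letterbox case only, with an abbreviated ratio table; not the script's exact code):

```python
KNOWN_RATIOS = {"Scope (2.39:1)": 2.39, "Flat (1.85:1)": 1.85, "HDTV (16:9)": 16 / 9}

def snap_full_width_crop(video_w, video_h, detected_w, detected_h, tolerance=0.03):
    """Snap a full-width letterbox crop to the nearest known aspect ratio."""
    ratio = detected_w / detected_h
    name, target = min(KNOWN_RATIOS.items(), key=lambda kv: abs(ratio - kv[1]))
    if abs(ratio - target) / target >= tolerance:
        # Not close to any known ratio: keep the detected crop, centred vertically.
        return f"crop={detected_w}:{detected_h}:0:{(video_h - detected_h) // 2}", None
    new_h = round(video_w / target)
    if new_h % 8:                      # encoders prefer mod-8 dimensions
        new_h += 8 - (new_h % 8)
    new_y = round((video_h - new_h) / 2)
    if new_y % 2:                      # keep the offset even for 4:2:0 chroma
        new_y -= 1
    return f"crop={video_w}:{new_h}:0:{new_y}", name

print(snap_full_width_crop(1920, 1080, 1920, 800))
# -> ('crop=1920:808:0:136', 'Scope (2.39:1)')
```

Padding *up* rather than down slightly under-crops, which is the safer failure mode: a sliver of black bar survives, but no picture content is lost.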