【YOLOv8】Pose (Action) Recognition: Push-up Counting

 



Use the Ultralytics YOLOv8 Pose model (yolov8x-pose.pt) together with the AIGym solutions module to estimate the pose of people in a video and count push-up repetitions.
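
Setup only needs the ultralytics and opencv-python packages (pip install ultralytics opencv-python); the pose weights are downloaded automatically by Ultralytics the first time the model name is used. A quick sanity check, as a minimal sketch:

from ultralytics import YOLO

model = YOLO("yolov8x-pose.pt")  # weights are auto-downloaded on first use
print(model.task)                # should print "pose" for a pose-estimation model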


AIGym decides the exercise phase from a joint angle and two thresholds:

  • up_angle: when the angle rises above this value, the body is considered to be in the "up" (pushed-up) phase
  • down_angle: when the angle drops below this value, the body is considered to be in the "down" (lowered) phase

kpts=[5, 7, 9] selects the left shoulder, left elbow, and left wrist from the 17 COCO keypoints.
The angle formed by these three points is used to decide whether one push-up repetition has been completed (see the sketch below).
For squat detection, kpts can similarly be changed to [11, 13, 15] (left hip, left knee, left ankle).
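
To make the angle logic concrete, below is a minimal sketch of computing a joint angle from three keypoints and counting repetitions with the up_angle/down_angle thresholds. It only illustrates the idea behind AIGym, not its actual implementation; joint_angle and the fake angle sequence are made up for this example:

import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c, e.g. shoulder-elbow-wrist."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# One repetition = the angle drops below down_angle, then rises back above up_angle.
up_angle, down_angle = 100, 80
stage, count = "up", 0
for angle in [150, 120, 90, 70, 85, 110, 140]:  # fake elbow-angle sequence for one push-up
    if angle < down_angle:
        stage = "down"
    elif angle > up_angle and stage == "down":
        stage, count = "up", count + 1
print(count)  # 1

# With a raw Ultralytics pose result, the same three points could be read as:
#   kp = results[0].keypoints.xy[0]            # (17, 2) keypoints of the first detected person
#   angle = joint_angle(kp[5], kp[7], kp[9])   # left shoulder, left elbow, left wrist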


https://github.com/Alimustoofaa/YoloV8-Pose-Keypoint-Classification?tab=readme-ov-file
Test program

import cv2
from ultralytics import solutions
MODEL_PATH = "yolov8x-pose.pt"  # alternative weights: yolov8x-pose.pt, yolo11n-pose.pt
VIDEO_PATH = "fuwocheng.mp4"
gym = solutions.AIGym(
    model=MODEL_PATH,
    kpts=[5, 7, 9],         # keypoints: left shoulder - left elbow - left wrist
    up_angle=100,
    down_angle=80,
    line_width=2,
    show=False
)

cap = cv2.VideoCapture(VIDEO_PATH)
if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

# ===== Added: control the display window size and position =====
window_name = "Processed Frame"
cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
cv2.resizeWindow(window_name, 640, 480)  # window size
cv2.moveWindow(window_name, 200, 100)    # window position on screen
# ====================================

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    results = gym.process(frame)
    processed_frame = results.plot_im

    cv2.imshow(window_name, processed_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
print("Video processing completed.")


Result

(Demo video omitted.)
Ref:
https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/ai_gym.py

