
Machine Learning Aimbot

An undetectable hardware- and machine-learning-based aimbot for Overwatch 2

Anti-Cheat Engine Hardware Bypass

An Arduino Leonardo disguised as a mouse poses a significant detection challenge for anti-cheat engines because it can emulate human-like input. The board can be programmed to send keyboard and mouse signals to the computer, mimicking the actions of a legitimate user, which makes it difficult for anti-cheat software to distinguish genuine user input from input generated by the Arduino.

Additionally, because the device operates at the hardware level, it bypasses many software-based detection mechanisms. Anti-cheat systems typically monitor software interactions and processes running on the computer (unless you're Riot Vanguard), but the Arduino Leonardo, appearing as a standard HID (Human Interface Device), operates below the radar of these software-based monitoring tools. Its programmability also lets it perform complex sequences of actions that would otherwise require human intervention, making it a potent tool for evading detection.

Spoofing A HID

This Arduino Leonardo sketch makes the device function as a disguised mouse while allowing an external program to send input commands. It uses the HID and USB host libraries to read a real mouse and to interpret serial data as movement commands. The MouseRptParser class handles mouse button events, updating the left, right, and middle button states. The setup function initializes the mouse and USB configuration, while the loop function continuously checks for serial input, processes mouse movement, and sends the appropriate HID reports to emulate mouse actions. The result is a device that looks like a regular mouse to the system while executing programmed inputs from an external source.

Mouse.ino:

#include <hidboot.h>
#include <usbhub.h>
#include <Mouse.h>
#include <Wire.h>
#ifdef dobogusinclude
#include <spi4teensy3.h>
#endif
#include <SPI.h>
USB     Usb;
USBHub     Hub(&Usb);
HIDBoot < USB_HID_PROTOCOL_KEYBOARD | USB_HID_PROTOCOL_MOUSE > HidComposite(&Usb);
HIDBoot<USB_HID_PROTOCOL_MOUSE>    HidMouse(&Usb);

// A signed char ranges from -128 to 127
int delta[2];
int negMax = -127;
int posMax = 127;

// Mouse
int lmb = 0;
int rmb = 0;
int mmb = 0;


class MouseRptParser : public MouseReportParser
{
  protected:
    void OnMouseMove(MOUSEINFO *mi);
    void OnLeftButtonUp(MOUSEINFO *mi);
    void OnLeftButtonDown(MOUSEINFO *mi);
    void OnRightButtonUp(MOUSEINFO *mi);
    void OnRightButtonDown(MOUSEINFO *mi);
    void OnMiddleButtonUp(MOUSEINFO *mi);
    void OnMiddleButtonDown(MOUSEINFO *mi);
};

void MouseRptParser::OnMouseMove(MOUSEINFO *mi)
{
  delta[0] = mi->dX;
  delta[1] = mi->dY;
};
void MouseRptParser::OnLeftButtonUp(MOUSEINFO *mi)
{
  lmb = 0;
};
void MouseRptParser::OnLeftButtonDown(MOUSEINFO *mi)
{
  lmb = 1;
};
void MouseRptParser::OnRightButtonUp(MOUSEINFO *mi)
{
  rmb = 0;
};
void MouseRptParser::OnRightButtonDown(MOUSEINFO *mi)
{
  rmb = 1;
};
void MouseRptParser::OnMiddleButtonUp(MOUSEINFO *mi)
{
  mmb = 0;
};
void MouseRptParser::OnMiddleButtonDown(MOUSEINFO *mi)
{
  mmb = 1;
};

MouseRptParser MousePrs;

void setup()
{
  Mouse.begin();
  Serial.begin( 115200 );
  if (Usb.Init() == -1)
    Serial.println("USB host shield init failed");
  HidComposite.SetReportParser(1, &MousePrs);
  HidMouse.SetReportParser(0, &MousePrs);
}

void loop()
{
  delta[0] = 0;
  delta[1] = 0;
  Usb.Task();
  // Left Mouse
  if (lmb == 0){
    Mouse.release(MOUSE_LEFT);
  } else if (lmb == 1){
    Mouse.press(MOUSE_LEFT);
  }
  // Right Mouse
  if (rmb == 0){
    Mouse.release(MOUSE_RIGHT);
  } else if (rmb == 1){
    Mouse.press(MOUSE_RIGHT);
  }
  // Middle Mouse
  if (mmb == 0){
    Mouse.release(MOUSE_MIDDLE);
  } else if (mmb == 1){
    Mouse.press(MOUSE_MIDDLE);
  }

  if (Serial.available() > 0)
  {
    // Read one command, terminated by 'x'
    String data = Serial.readStringUntil('x');

    // Find the ':' separator between deltaX and deltaY
    int ohHiMarc = data.indexOf(':');
    Serial.println(data); // Echo the command back for debugging

    // DeltaX & DeltaY
    delta[0] = data.substring(0, ohHiMarc).toInt();
    delta[1] = data.substring(ohHiMarc + 1).toInt();

    handleX(delta[0]);
    handleY(delta[1]);
  } else{
    Mouse.move(delta[0], delta[1]);
  }
}

// Handle Moving of x 
void handleX(int dx){
  
  int spawns; 
  int remainder;
  
  if(dx < negMax)
  {
    // How many times we move mouse
    spawns = int(dx / negMax); 
    
    // How much we move after the for loop
    remainder = int(dx % negMax);

    // Because we can only move 127 at a time,
    // we need a for loop to spawn multiple mouse events.
    for(int i = 0; i < spawns; i++)
    {
      Mouse.move(negMax , 0, 0);
    }
    // Move Remainder
    Mouse.move(remainder, 0, 0);
  } 
  else if (dx >= negMax && dx <= posMax)
  {
    Mouse.move(dx, 0, 0);
  }
  else if (dx > posMax)
  {
    // How many times we move mouse
    spawns = int(dx / posMax); 
    
    // How much we move after the for loop
    remainder = int(dx % posMax);
    
    for(int i = 0; i < spawns; i++)
    {
      Mouse.move(posMax , 0, 0);
    }
    // Move Remainder
    Mouse.move(remainder, 0, 0);
  }
  
}

// Handle Moving of y
void handleY(int dy){
  
  int spawns; 
  int remainder;
  // Note: the Y axis is inverted for Mouse.move(), so positive and negative are swapped below
  if(dy < negMax)
  {
    // How many times we move mouse
    spawns = int(dy / negMax); 
    
    // How much we move after the for loop; *= -1 flips it to the
    // correct direction on the Arduino
    remainder = int(dy % negMax);
    remainder *= -1;
    // Because we can only move 127 at a time,
    // we need a for loop to spawn multiple mouse events.
    for(int i = 0; i < spawns; i++)
    {
      Mouse.move(0, posMax, 0);
    }
    // Move Remainder
    Mouse.move(0, remainder, 0);
  } 
  else if (dy >= negMax && dy <= posMax)
  {
    dy *= -1;
    Mouse.move(0, dy, 0);
  }
  else if (dy > posMax)
  {
    // How many times we move mouse
    spawns = int(dy / posMax); 
    
    // How much we move after the for loop
    remainder = int(dy % posMax);
    remainder *= -1;
    
    for(int i = 0; i < spawns; i++)
    {
      Mouse.move(0, negMax, 0);
    }
    // Move Remainder
    Mouse.move(0, remainder, 0);
  }
};
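
The sketch reads movement commands over serial as deltaX:deltaY pairs terminated by the character 'x'. As a quick sanity check, a minimal host-side Python sketch along these lines can drive the cursor without the aimbot running; the COM7 port, the send_move helper, and the SerialTest.py name are assumptions for illustration, and pyserial is assumed to be installed:

SerialTest.py:

import time
import serial

# Assumed port; must match wherever the Leonardo enumerates on your machine.
# The Leonardo's native USB CDC largely ignores the baud rate, but 115200
# matches Serial.begin(115200) in Mouse.ino.
arduino = serial.Serial('COM7', 115200, timeout=0)
time.sleep(2)  # Give the board a moment after the port opens

def send_move(dx, dy):
    # One command in the sketch's deltaX:deltaY + 'x' terminator format
    arduino.write((str(dx) + ":" + str(dy) + "x").encode())

send_move(200, 0)  # Right 200 px; handleX() splits this into <=127-unit steps
time.sleep(0.5)
send_move(0, 50)   # The sketch inverts Y, so a positive dy moves the cursor up
arduino.close()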

Machine Learning Aimbot

This Python script functions as an aimbot that interfaces with the Arduino Leonardo to automate aiming in a game. It uses the mss library to capture the screen and torch to load a YOLOv5 model trained to recognize enemy head hitboxes. While the Ctrl key is held, the script calculates the difference between the detected enemy's head position and the center of the screen, then sends those coordinates to the Arduino over serial so the mouse snaps the crosshair onto the enemy's head. The keyboard library monitors key presses, and OpenCV displays the detections and movement adjustments on screen. The program runs continuously in real time and terminates when the 'q' key is pressed.

Aimbot.py:

import mss
import numpy as np
import cv2
import time
import keyboard
import torch
import serial

ScreenSizeX = 1600
ScreenSizeY = 900
model = torch.hub.load('ultralytics/yolov5', 'custom', path='C:/Users/199x/Desktop/AIVision/yolov5-master/runs/train/exp7/weights/best.pt')
arduino = serial.Serial('COM7', 115200, timeout=0)  # Match Serial.begin(115200) in Mouse.ino


# Keep the capture handle open for the whole loop; grabbing after a
# 'with' block exits would use a closed mss instance.
sct = mss.mss()
monitor = {'top': 20, 'left': 0, 'width': ScreenSizeX, 'height': ScreenSizeY}

while True:
    t = time.time()

    img = np.array(sct.grab(monitor))
    results = model(img)

    rl = results.xyxy[0].tolist()
    print(rl)
    # Detection made
    if len(rl) > 0:
        # Confidence threshold
        if rl[0][4] > .35:
            # Class 15 == training bot in the custom model
            if rl[0][5] == 15:
                # results.xyxy rows are [xmin, ymin, xmax, ymax, confidence, class]
                xmax = int(rl[0][2])
                width = int(rl[0][2] - rl[0][0])
                screenCenterX = ScreenSizeX / 2
                centerX = int((xmax - (width / 2)) - screenCenterX)

                # Y INFO: rl[0][1] is ymin, so height comes out negative;
                # ymin - height/4 lands about a quarter of the box below its top (the head)
                ymax = int(rl[0][1])
                height = int(rl[0][1] - rl[0][3])
                screenCenterY = ScreenSizeY / 2
                centerY = int((ymax - (height / 4)) - screenCenterY)

                # Sensitivity multipliers; tune as needed
                moveX = int(centerX * 3)
                moveY = int(centerY * 3.5)

                if centerY < screenCenterY:
                    moveY *= -1

                if keyboard.is_pressed('ctrl'):
                    print("center Y coords: " + str(centerY))
                    print("moveX: " + str(moveX))
                    print("moveY: " +str(moveY))
                    arduino.write((str(moveX) + ":" + str(moveY) + 'x').encode())


    cv2.imshow('s', np.squeeze(results.render()))
    # print('fps: {}'.format(1 / (time.time() - t)))
    cv2.waitKey(1)
    if keyboard.is_pressed('q'):
        break
    
cv2.destroyAllWindows()
arduino.close()

The Math

Here is the diagram explaining the math behind the aimbot. It illustrates how the differences between the target position and the screen center (deltaX and deltaY) are calculated and used to move the crosshair onto the target:

  • Screen Center: (ScreenCenterX, ScreenCenterY)

  • Target Position: (TargetX, TargetY)

  • Delta Calculation:

    • deltaX = TargetX - ScreenCenterX

    • deltaY = TargetY - ScreenCenterY

These differences are then used to move the crosshair to align with the target.
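
As a minimal worked example with the script's 1600x900 capture (the target coordinates below are made up for illustration):

# Screen center for the 1600x900 capture used in Aimbot.py
ScreenCenterX = 1600 / 2   # 800.0
ScreenCenterY = 900 / 2    # 450.0

# Hypothetical target position, for illustration only
TargetX, TargetY = 950, 380

deltaX = TargetX - ScreenCenterX   # 150.0  -> crosshair must move right
deltaY = TargetY - ScreenCenterY   # -70.0  -> crosshair must move up
print(deltaX, deltaY)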

Final Product

After training the model on nearly 500 unique, labeled images of training bots, here is the result:
