Writing a clicker bot in Kotlin for Lineage 2


Not all the New Year’s salads had been eaten yet, “The Irony of Fate” had already been watched, and an eternity still remained before the start of the working week, so I needed entertainment for the rest of the holidays. Feeling nostalgic, I opened Lineage 2, one of the most popular MMORPGs of the 2000s in the CIS. However, I no longer wanted to play it myself, so the idea came up to automate the whole business. Details under the cut!

Introduction

In my school years, my friends and I played various MMORPGs, but the one that hooked us most was Lineage 2. The essence of the game is to repeat the same actions 80% of the time to kill monsters and, from time to time, fight other players over those monsters. Unfortunately, I no longer have time for such pastimes, so I decided to automate it! Just recently I came across an article on OpenCV that inspired me to dig into the topic a little – in it, the author detected the presence of a tomato in a picture 🙂

No sooner said than done! Google was soon open in search of ready-made implementations, and to my surprise I immediately found a post on Habr. The author’s ideas matched mine, except that the code was written in Python. (Post)
A small digression: I am a mobile developer who has never touched this Python of yours. Unfortunately, in two evenings I was unable to run the code from the author’s repository. First there were conflicts with the Python version itself, then some obscure errors with the libraries, and in the end the AutoHotPy tool for working with the mouse and keyboard refused to work at all. So I decided to write my own implementation in Kotlin, with blackjack and dwarves! (Dwarves are one of the races in Lineage 2.)

Go!

To work with the game window, we will use the open-source Java Native Access (JNA) library. Create a new project in your favorite IDE, download the two jars, JNA and JNA Platform, from GitHub (https://github.com/java-native-access/jna), put them in the project, and don’t forget to include them via Gradle:

implementation(files("lib/jna-5.12.1.jar"))
implementation(files("lib/jna-platform-5.12.1.jar"))

Defining the game window

Nothing complicated here: in the list of windows we find one whose title contains “Lineage”, get its coordinates, and bring it to the foreground:

private fun detectWindow(windowName: String): Rectangle {
    val user32 = MyUser32.instance
    val rect = Rectangle(0, 0, 0, 0)
    var windowTitle = ""

    // Find the window whose title contains the game name and remember its bounds
    WindowUtils.getAllWindows(true).forEach {
        if (it.title.contains(windowName)) {
            rect.setRect(it.locAndSize)
            windowTitle = it.title
        }
    }

    // Bring the game window to the foreground
    val hwnd: WinDef.HWND = user32.FindWindow(null, windowTitle)
    user32.ShowWindow(hwnd, User32.SW_SHOW)
    user32.SetForegroundWindow(hwnd)

    return rect
}

Finding a target

My idea was the same as that of the author from the mentioned article, however, some details did not work for me and I had to choose the implementation for myself. Our algorithm of actions will be as follows: we take a screenshot of the game, using OpenCV filtering we find the names of monsters, hover over them with the mouse, attack until the monster runs out of health, switch to the next monster, and so on ad infinitum! Fun, isn’t it? Let’s go!

We install OpenCV following the guide on the official site, drop openCV-…jar and opencv_java..dll into the project, and don’t forget to wire them up via Gradle:

implementation(files("lib/opencv-460.jar"))  

Taking a screenshot of the game window

Lineage 2 main window

The screenshot shows that, in addition to monster names, there are other white objects that can interfere: the radar, the chat, our character’s name, etc. So we modify the screenshot and paint the unwanted areas black:

Painted over the “unnecessary” interference

This is where OpenCV comes into play. To start the grind, we need to find targets, so we filter the image until only white rectangular objects remain on the screen (this is what monster names look like). The idea is as follows: remembering that a picture is made up of pixels, we apply a threshold transformation so that only the near-white pixels survive:

Imgproc.threshold(source, source, 252.0, 255.0, Imgproc.THRESH_BINARY)
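In plain terms, THRESH_BINARY maps every pixel above the threshold to the maximum value and everything else to zero. A toy pure-Kotlin sketch of that rule over one row of grayscale values (just an illustration of the semantics, not actual OpenCV code):

```kotlin
// THRESH_BINARY semantics: pixel > thresh -> maxVal, otherwise -> 0.
fun thresholdBinary(pixels: IntArray, thresh: Int, maxVal: Int): IntArray =
    IntArray(pixels.size) { i -> if (pixels[i] > thresh) maxVal else 0 }

fun main() {
    // A row of grayscale values: only the near-white ones (253, 255) survive.
    val row = intArrayOf(0, 120, 252, 253, 255)
    println(thresholdBinary(row, 252, 255).joinToString())  // 0, 0, 0, 255, 255
}
```

Note that with a threshold of 252.0 a pixel of exactly 252 is dropped too: the comparison is strictly greater-than.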

Next, we perform several morphological transformations (closing, erosion, and dilation), which filter the remaining white objects by the specified kernel size and merge them into solid white rectangles (a more detailed description is in the official docs):

Filtered white rectangles
private fun findPossibleTargets(rectangle: Rectangle): List<MatOfPoint> {
    val capture: BufferedImage = Robot().createScreenCapture(rectangle)
    fillBlackExcess(capture, rectangle)

    val source: Mat = img2Mat(capture)

    // Keep only near-white pixels, then merge them into solid rectangles
    Imgproc.cvtColor(source, source, Imgproc.COLOR_BGR2GRAY)
    Imgproc.threshold(source, source, 252.0, 255.0, Imgproc.THRESH_BINARY)
    val kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, Size(10.0, 1.0))
    Imgproc.morphologyEx(source, source, Imgproc.MORPH_CLOSE, kernel)
    Imgproc.erode(source, source, kernel)
    Imgproc.dilate(source, source, kernel)

    val points: MutableList<MatOfPoint> = mutableListOf()
    Imgproc.findContours(source, points, Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)

    // Keep only contours with the proportions of a monster name plate
    return points
        .sortedBy { contour -> contour.toList().maxBy { it.y }.y }
        .filter { contour ->
            val xs = contour.toList().map { it.x }
            val ys = contour.toList().map { it.y }
            val width = xs.max() - xs.min()
            val height = ys.max() - ys.min()
            width > 30 && width < 200 && height < 30
        }
}

Object Comparison

Okay, we learned how to hover over a target; now we need to determine whether the mouse is over a monster or over some piece of scenery.

Since the flora and fauna of the Lineage 2 world are quite diverse, we need to make sure that the white rectangle is our desired target in the form of a monster, and not some white wall or patch of grass. To do this, we take a screenshot again, load our template, convert both images to grayscale, and use OpenCV’s matchTemplate method.

It works roughly as follows: the template image is slid across the source image, and at each position a correlation between the two is computed; the result is a value from 0.0 to 1.0. (More details on the method in the official docs.)
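To build intuition for what that correlation means, here is a toy 1-D sketch in plain Kotlin (my own simplified formula, not OpenCV’s exact TM_CCOEFF_NORMED implementation): the template is slid across a signal, and each offset gets a normalized score where 1.0 is a perfect match:

```kotlin
import kotlin.math.sqrt

// Simplified 1-D normalized cross-correlation, in the spirit of TM_CCOEFF_NORMED:
// slide the template over the signal and score each offset.
fun matchScores(signal: DoubleArray, template: DoubleArray): DoubleArray {
    val tMean = template.average()
    return DoubleArray(signal.size - template.size + 1) { offset ->
        val window = signal.copyOfRange(offset, offset + template.size)
        val wMean = window.average()
        var num = 0.0; var dw = 0.0; var dt = 0.0
        for (i in template.indices) {
            val a = window[i] - wMean
            val b = template[i] - tMean
            num += a * b; dw += a * a; dt += b * b
        }
        if (dw == 0.0 || dt == 0.0) 0.0 else num / sqrt(dw * dt)
    }
}

fun main() {
    val image = doubleArrayOf(0.0, 0.0, 10.0, 20.0, 10.0, 0.0)
    val template = doubleArrayOf(10.0, 20.0, 10.0)
    val scores = matchScores(image, template)
    // The best score is at offset 2, where the template lines up exactly.
    println(scores.indices.maxBy { scores[it] })  // 2
}
```

In the real code, Core.minMaxLoc plays the role of that final maxBy: it finds the best score over all 2-D offsets.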

P.S. For those who will try this implementation: note that monsters will have different templates on different chronicles and servers, so you will have to prepare these images yourself.

HP bar comparison
private fun isMouseSelectingAMob(rectangle: Rectangle): Boolean {
    Thread.sleep(100L)
    val minMatchThreshold = 0.8
    val capture: BufferedImage = Robot().createScreenCapture(rectangle)

    val screen: Mat = img2Mat(capture)
    Imgproc.cvtColor(screen, screen, Imgproc.COLOR_BGR2GRAY)

    val template: Mat = Imgcodecs.imread("./src/main/resources/$TARGET_TEMPLATE_NAME.png")
    Imgproc.cvtColor(template, template, Imgproc.COLOR_BGR2GRAY)

    // Write the correlation map into a separate Mat; the best score is maxVal
    val result = Mat()
    Imgproc.matchTemplate(screen, template, result, Imgproc.TM_CCOEFF_NORMED)
    val value = Core.minMaxLoc(result).maxVal

    return value > minMatchThreshold
}

Mouse/keyboard emulation

First we need to learn how to emulate mouse movement and keyboard presses. Unfortunately, I did not find a ready-made library for Java/Kotlin, so we will use one written in C called Interception (https://github.com/oblitum/Interception). Here I remember that I am a mobile developer and don’t know C, but I quickly cheer up, because writing a Kotlin wrapper turned out to be quite simple. We install it following the guide and drop the files interception.dll and interception.h into the project. Interception runs in a separate thread and completely takes over the mouse and keyboard, emulating movement and presses via commands. However, to give control back we need to handle this explicitly by assigning a specific key, otherwise we will have to reboot the whole computer 🙂

override fun run() {
    var device: Int
    // Pump input events from Interception until it reports an error
    while (Interception.interception_receive(
            context,
            Interception.interception_wait(context).also { device = it },
            emptyStroke,
            1
        ) > 0
    ) {
        val strokeCode = emptyStroke.code
        val keyboardEscShort = INTERCEPTION_FILTER_KEYBOARD_ESC.toShort()

        // Esc is our "panic button": give control back and exit
        if (device == KEYBOARD_DEVICE_ID && strokeCode == keyboardEscShort) {
            println("finish program")
            exitProcess(0)
        }

        // Pass real (non-injected) input through to the system
        if (!emptyStroke.isInjected) {
            Interception.interception_send(context, device, emptyStroke, 1)
        }
    }
    Interception.interception_destroy_context(context)
}

An interesting detail: the mouse and keyboard have their own device IDs through which commands are received and sent. You can find them by simple enumeration: for the mouse the value is between 11 and 20, and for the keyboard between 1 and 10. I noticed that these IDs can change from time to time, even though I did not physically move the devices to other ports. If any reader knows why this happens, please tell us in the comments 🙂

Movement

Okay, we learned how to move the mouse and press keys, but our movement looks like teleportation and may arouse suspicion among the server admins. The author of the article mentioned above rightly noted that there is the so-called Bresenham algorithm which, although not perfectly, at least slightly resembles human movement. We take the implementation from the wiki, translate it into Kotlin, and slightly modify the movement, because Interception moves the mouse not to an absolute position but relative to the current mouse location.
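A minimal sketch of what that could look like (my own function and variable names; the repo code may differ): a classic Bresenham line from the current to the target position, emitted as one-pixel relative steps, which is the form Interception expects:

```kotlin
import kotlin.math.abs

// Bresenham line from (x0, y0) to (x1, y1), returned as relative one-pixel
// steps (dx, dy) -- Interception moves the mouse relative to its current
// position, so each step is fed to it as a small delta.
fun bresenhamSteps(x0: Int, y0: Int, x1: Int, y1: Int): List<Pair<Int, Int>> {
    val steps = mutableListOf<Pair<Int, Int>>()
    var x = x0; var y = y0
    val dx = abs(x1 - x0); val dy = -abs(y1 - y0)
    val sx = if (x0 < x1) 1 else -1
    val sy = if (y0 < y1) 1 else -1
    var err = dx + dy
    while (x != x1 || y != y1) {
        val e2 = 2 * err
        var stepX = 0; var stepY = 0
        if (e2 >= dy) { err += dy; x += sx; stepX = sx }
        if (e2 <= dx) { err += dx; y += sy; stepY = sy }
        steps += stepX to stepY
    }
    return steps
}

fun main() {
    // Moving from (0, 0) to (5, 3): the relative steps must sum to (+5, +3).
    val steps = bresenhamSteps(0, 0, 5, 3)
    println(steps.sumOf { it.first } to steps.sumOf { it.second })  // (5, 3)
}
```

Adding a small Thread.sleep between steps spreads the motion over time, which looks far more human than a single jump.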

Recognizing the amount of health of a monster

We learned how to find objects and attack them, but we need to understand when we are done with the current monster and should move on to the next one. To do this, we need to determine the monster’s remaining health. This is done in two steps. In the first step, after we have selected our target, we again use a special template to find the rectangle of the monster’s health window; the approach is the same as above, so we will omit the details.

Health window

Let’s take a closer look at the second step. Since the monster’s health window contains a bar of several colors, where red is the monster’s current remaining health and brown is the health it has lost, we can find the boundary between them and understand whether the monster is still alive. To do this, we specify the lower and upper bounds of the red color in HSV, use the inRange function to filter by the desired color range, and find all objects using contour analysis:

private fun checkHpBar(hpBarMat: Mat): Int {
    // HSV bounds for the red part of the HP bar
    val lower = Scalar(0.0, 150.0, 90.0)
    val upper = Scalar(10.0, 255.0, 255.0)

    val subImageMat: Mat = hpBarMat.clone()
    Imgproc.cvtColor(subImageMat, subImageMat, Imgproc.COLOR_BGR2HSV)
    Core.inRange(subImageMat, lower, upper, subImageMat)

    val remainingContours: MutableList<MatOfPoint> = mutableListOf()
    Imgproc.findContours(subImageMat, remainingContours, Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)

    // The red contour's width relative to the whole bar gives the HP percentage
    val remainingLeftX = remainingContours.firstOrNull()?.toList()?.minBy { it.x }?.x ?: 0.0
    val remainingRightX = remainingContours.firstOrNull()?.toList()?.maxBy { it.x }?.x ?: 0.0
    val totalHpBarWidth = subImageMat.width()
    val remainingHpBarWidth = remainingRightX - remainingLeftX
    val percentHpRemaining = remainingHpBarWidth * 100 / totalHpBarWidth

    return if (percentHpRemaining < 1) 0 else percentHpRemaining.toInt()
}

More details about inRange and findContours are in the official docs.

Done! We have all the necessary functions; all that remains is to code the algorithm.
Our actions:

  1. look for a monster by searching for all white rectangles

  2. check whether it is alive; if not, return to step 1

  3. start attacking

  4. when the monster’s health reaches zero, go back to step 1

Repeat until you get bored!
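The steps above can be sketched as a top-level loop. The game-facing calls (target search, HP check, attack) are stubbed here with a simple counter, since the real versions were shown earlier; class and method names are my own:

```kotlin
// A sketch of the bot's main loop. Each stub stands in for the OpenCV /
// Interception code from the previous sections, simulated with HP numbers.
class BotLoop(private val monsters: ArrayDeque<Int>) {
    var killed = 0
        private set
    private var currentHp = 0

    // Step 1: find a white rectangle; here, pop the next simulated monster
    private fun findTarget(): Boolean {
        currentHp = monsters.removeFirstOrNull() ?: return false
        return true
    }

    private fun isAlive() = currentHp > 0      // step 2: read the HP bar
    private fun attack() { currentHp -= 25 }   // step 3: press the attack key

    fun run() {
        while (findTarget()) {                 // repeat until no targets left
            if (!isAlive()) continue           // dead already -> back to step 1
            while (isAlive()) attack()         // steps 3-4: attack until HP is 0
            killed++
        }
    }
}

fun main() {
    val bot = BotLoop(ArrayDeque(listOf(100, 0, 50)))  // HP values; 0 = already dead
    bot.run()
    println(bot.killed)  // 2 (the monster with 0 HP is skipped)
}
```

In the real bot the outer loop runs forever (or until Esc is pressed), and each stub is replaced by a screenshot-and-OpenCV call.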

The result of our work on YouTube:

Link to sources

Conclusion

Clearly the clicker turned out less than ideal, and any external influence on the character will completely break our logic, but my goal was not to write the most optimal bot. Using OpenCV lets you not only identify tomatoes and find monsters in MMORPGs; it opens up huge scope for applications in various fields, limited only by imagination. My goal was to understand the basics and apply them to an example. The next step would be to try modern machine vision with neural networks, but that’s for next time. Share in the comments what other unusual areas computer vision could be used in 🙂
