Web-API [v4] - AdminPanel & RESTful web server [now with screenshots!]

Please note, discussion for this plugin has been moved to its Ore discussion thread.

Web-API

Web-API is a RESTful API to access & manage your Sponge Minecraft server.
It also offers an AdminPanel to manage your server through a web interface.

By default, the server runs on localhost, port 8080.

Features

Provides an AdminPanel to manage your Minecraft server and a RESTful web server with:

  • General information about the server
  • History for
    • Chat
    • Commands
  • WebHooks that execute
    • on Minecraft events (like players joining, chat, death, etc.)
    • when certain commands are called (supports parameters)
    • on custom events (e.g. from other plugins)
  • List, inspect, manipulate, create, delete:
    • Players
    • Worlds
    • Entities
    • Tile entities
    • Plugins
    • Blocks
  • Information about loaded classes
  • Arbitrary command execution (like using the server console)
  • Custom serializers to determine how data is turned into JSON
  • Permissions for each endpoint

AdminPanel screenshots

Screenshots: Login, Dashboard, Commands, Entities, Worlds, Map

Integrations

This plugin offers integrations (an API endpoint and/or a page in the AdminPanel) for the following plugins:

Source

Can be found on GitHub

Downloads

Are available here

Docs / Tutorials

I’ve started working on some short tutorials, which you can find here

Support

You can leave a message in the channel here, private message me on the forums here, message me on Discord @ Valandur#3581, or join the Web-API Discord server with this invite link

External connections

Web-API uses bStats to track plugin usage (starting with v4.4.0).
It also includes sentry.io, which automatically reports errors. Error reports do NOT contain your server IP or any other personal information.

Both of these features can be turned off; bStats has its own config file, and Sentry can be turned off in the main config file config.conf by setting reportErrors = false

Issues

Please report them here

Donations

Are obviously not required but very much appreciated; they can be made here


Great :smiley: Do you plan to implement features to interact with the server?

Does it provide JSON feedback? As an example, if I remotely run “/money Keuterio”, will I get { callback: "9000 bucks" } back?

I’m not sure what you mean with “interact”.
If you use the /cmd endpoint you can execute any command. For example, you could send the JSON { cmd: "money Keuterino" } and the API sends back any response generated by the server as JSON: { response: [ "9000 bucks" ] } (the response is an array since the server might send multiple messages).
This is a general implementation for any command though, because I can’t know what specific commands certain plugins might implement, so you would still have to “parse” the JSON response yourself, to a certain extent.
I could implement some of the minecraft/sponge commands with detailed callback/information.
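
A minimal Java client sketch of such a call (assuming the default localhost:8080 address from above; the exact route and any required API key may differ):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CmdExample {
    public static void main(String[] args) throws Exception {
        // Assumes the default localhost:8080 server and the /cmd endpoint described above
        URL url = new URL("http://localhost:8080/cmd");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);

        // Request body: the command to execute, without the leading slash
        String body = "{ \"cmd\": \"money Keuterino\" }";
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Response body: { "response": [ "9000 bucks" ] } -- an array, since the
        // server may send more than one message back for a single command
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```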


“Interacting” would be getting a chest and its contents, then modifying it. But having a callback is already great :slight_smile:

In fact, I requested a plugin 4 months ago, and what you did is almost exactly what I am looking for. So thank you, and I can’t wait to see the future of Web-API.

Ohhh now I see what you mean.
That is definitely a cool idea, I’ll look into how it can be done cleanly with endpoints.
I also had the idea of having an endpoint for socket.io, where you could subscribe to events like chat messages or player joins.

You run an internal NodeJS server?
I had some trouble trying to run one; that would be great, by the way.

No, I’m using Jetty, but as far as I know Jetty supports WebSockets, so I should be able to get that set up.

I’ve got entities (mobs & such) working now:
List with /entity
Details with /entity/<entity-uuid>.

And I’ve also got tile entities (chests & furnaces & co) working:
List with /tile-entity or /tile-entity/<world-name-or-uuid> (to filter by world)
Details with /tile-entity/<world-name-or-uuid>/<x>/<y>/<z> (I might change this GET path later; it’s kinda weird…)

But they’re not documented yet, I’ll get to that shortly.
Also, no interaction is possible yet (meaning you can only look at the stats/contents but not modify them yet).

I had a look over your source and I’m concerned by the absolute lack of thread management which is going on. You are accessing resources from multiple threads which may work under some circumstances, but is almost certain to explode once the number of requests/server load exceeds even the most basic thresholds.

Examples:

  • No marshalling of consumed chat messages. Chat messages are written to a non-thread-safe collection (specifically ArrayList) which is returned “bare” to your servlet on another thread. If a chat message arrives whilst your servlet is crawling the list then it’s going to explode with a CME (ConcurrentModificationException).

    This is exacerbated by the fact that you are doing no tombstoning/size limiting of the chatMessages list, so it will grow forever until your server runs out of heap space (unlikely, but still a massive waste of memory), plus the fact that if your servlet tries to return 1GB of chat logs in a single response then (A) something will explode and (B) the longer the servlet takes to consume the chat log, the more likely the above CME becomes.

  • No marshalling of dispatched commands: you dispatch commands on the Jetty thread, which has nondeterministic behaviour. Commands should never be dispatched anywhere but the main thread.

Suggestions:

  • Marshal chat messages via a thread-safe bounded queue, or at least lock the original list and return a copy of the contents when requested. Limit the length of the chat message list by cleaning entries by age or by limiting the queue size. You can use ConcurrentLinkedQueue for the former (intrinsically thread-safe) or use LinkedList (which implements both List and Deque (aka double-ended queue, pronounced “deck”)) with a monitor lock to control access to it (explicitly thread-safe); see the sketch after this list.

  • Marshal commands onto the main thread via the scheduler or pump across threads using your own ConcurrentLinkedQueue.
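
A minimal sketch of the first suggestion: a bounded, thread-safe chat log (the class name and size limit are hypothetical, not taken from the plugin’s source):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical replacement for the bare ArrayList: written from the main thread,
// read from arbitrary Jetty threads.
public class ChatLog {
    private static final int MAX_MESSAGES = 500; // arbitrary cap to stop unbounded growth

    private final ConcurrentLinkedQueue<String> messages = new ConcurrentLinkedQueue<>();

    /** Main thread: called when a chat event is sunk. */
    public void add(String message) {
        messages.add(message);
        // ConcurrentLinkedQueue is unbounded, so trim the oldest entries ourselves.
        // size() is O(n) on this queue, which is fine for a small cap like this.
        while (messages.size() > MAX_MESSAGES) {
            messages.poll();
        }
    }

    /** Jetty thread: returns a snapshot, never the live collection. */
    public List<String> snapshot() {
        return new ArrayList<>(messages);
    }
}
```

Because the Jetty thread only ever sees a copy, a message arriving mid-request can no longer cause a CME.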

My guess is that you’re new to writing multi-threaded code, so this is intended as constructive feedback. It is most likely that none of the issues I’ve mentioned have bitten you on the arse … yet … and I would guess that you’re testing in a limited environment where you’ve either been lucky enough to avoid a CME thanks to the low volume of calls, or one or two have occurred but it doesn’t look repeatable enough to worry about or - worse - is just swallowed somewhere by the runtime.

The general approach when writing multi-threaded code is to treat all interactions between threads with respect. This means that you either do what’s affectionately known as “pumping” messages from one thread to the other via a thread-safe intermediary - eg. a thread-safe queue - or use monitor locks to let both threads access the same data but only one at a time. Ignoring the problem and hoping it will go away is not a viable approach unfortunately, oh that wishes could come true of course.

A good attitude to have when writing threaded code is to do the following:

  • First, try to scare yourself absolutely shitless about the world exploding if your threads screw each other over
  • Once you’ve recovered from the shock, have a cup of tea or something to help calm you down
  • Panic because you realise that you haven’t actually fixed the problem yet, you’ve been too busy drinking tea
  • Begin writing code, safe in the knowledge that the paranoia thus acquired will help you write thread-safe code
  • Ship your application
  • Cry in the corner as CME-related bugs trickle in anyway
  • Die miserable and alone, realising that nobody ever truly understood your threads and lamenting the fact that you didn’t either.

Some key observations being:

  1. If you aren’t scared and paranoid when writing threaded code, you’re doing it wrong
  2. See point 1

As I see it there are two possibilities:

  1. You didn’t actually realise you were writing threaded code, and therefore didn’t account for it. This is understandable because if you’ve been used to writing plugins in the past where everything is intrinsically thread-safe, you may have assumed that Jetty would be more of the same, with everything originating from the same thread and therefore you can act with impunity. This is a perfectly understandable assumption but unfortunately not true.
  2. You realised that the code was threaded, but hadn’t appreciated that the issues I’m describing could occur, either believing that Sponge would take care of the problems for you internally, or that the chances of a problem occurring were very small and could therefore be ignored; this is also not true.

The practical upshot is this:

  • Your plugin is sinking events on the “main” thread, which is fine. You are pushing chat messages into a local memory store, which is also fine but needs some size limiting; this is a simple issue to fix.
  • You are spinning up a Jetty server which of course operates on its own threads, and so requests will be handled on one of Jetty’s threads.
  • Problems arise because a collection which is being written by one thread (the main thread) is then being accessed in a non-thread-safe way from arbitrary (potentially multiple simultaneous) Jetty threads. If one of those threads is iterating over the list when a new chat message arrives, then the Jetty thread is going to throw a CME.
  • You are also dispatching commands from arbitrary Jetty threads. The problem here is that sending commands is not thread-safe either: the SendCommandEvent thus raised will be raised on your arbitrary Jetty thread, and thus any plugin sinking that event could cause corruption if it accesses game state, since game state must only ever be modified on the main thread.

I would recommend the following:

  • Learn about thread-safe queues in Java; of particular interest should be ConcurrentLinkedQueue, as it’s a handy lock-free queue implementation which is good for marshalling
  • Learn about monitor locks and the synchronized keyword in Java (a sketch follows below)
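
And a sketch of the monitor-lock alternative mentioned above (lock the original list, return a copy on request); again, the names are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Hypothetical variant using a monitor lock: both threads touch the same LinkedList,
// but only while holding its lock.
public class LockedChatLog {
    private static final int MAX_MESSAGES = 500; // arbitrary cap

    private final LinkedList<String> messages = new LinkedList<>();

    /** Main thread: append a message, dropping the oldest once over the cap. */
    public void add(String message) {
        synchronized (messages) {
            messages.addLast(message);
            if (messages.size() > MAX_MESSAGES) {
                messages.removeFirst();
            }
        }
    }

    /** Jetty thread: copy the contents under the lock and return the copy. */
    public List<String> snapshot() {
        synchronized (messages) {
            return new ArrayList<>(messages);
        }
    }
}
```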

If you require any further advice on how to address your problems, feel free to ask in #spongedev


I strongly recommend that you stop and figure out a proper marshalling solution. You simply cannot do what you are doing here (for example) safely from an arbitrary thread.

The general approach (since Jetty is going to require it) will be something along the lines of the following (a sketch follows the list):

  • When an incoming jetty request is handled, bundle the request into some kind of marshalling object, eg. EntityDataRequest
  • acquire the object’s intrinsic lock
  • offer the object to a thread-safe marshalling queue (eg. a ConcurrentLinkedQueue)
  • wait on the object - this will release the object’s intrinsic lock and wait the jetty thread on it
  • periodically (eg. once per tick) remove waiting jobs from the queue in the main thread, process them, push the result into the marshalling object and then call notify on the object.
  • consume the returned data in your jetty thread
  • send response
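
A minimal sketch of that flow, using the EntityDataRequest marshalling object mentioned above (all other names are hypothetical):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical marshalling object: the Jetty thread fills in the request,
// the main thread fills in the response and wakes the waiter up.
class EntityDataRequest {
    final String entityUuid;     // what the Jetty thread wants
    String responseJson;         // filled in by the main thread
    boolean done;

    EntityDataRequest(String entityUuid) {
        this.entityUuid = entityUuid;
    }
}

public class RequestPump {
    // Thread-safe queue shared between Jetty threads and the main thread
    private final Queue<EntityDataRequest> pending = new ConcurrentLinkedQueue<>();

    /** Jetty thread: enqueue the request and wait for the main thread to answer it. */
    public String fetchEntityData(String entityUuid) throws InterruptedException {
        EntityDataRequest req = new EntityDataRequest(entityUuid);
        synchronized (req) {               // acquire the object's intrinsic lock
            pending.offer(req);            // offer it to the marshalling queue
            while (!req.done) {
                req.wait();                // releases the lock and parks the Jetty thread
            }
        }
        return req.responseJson;
    }

    /** Main thread: called once per tick (e.g. from a scheduled task). */
    public void processPending() {
        EntityDataRequest req;
        while ((req = pending.poll()) != null) {
            String json = lookUpEntityAsJson(req.entityUuid); // safe: we are on the main thread
            synchronized (req) {
                req.responseJson = json;
                req.done = true;
                req.notifyAll();           // wake the waiting Jetty thread
            }
        }
    }

    private String lookUpEntityAsJson(String uuid) {
        // Placeholder for the actual game-state lookup and serialisation
        return "{ \"uuid\": \"" + uuid + "\" }";
    }
}
```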

Now fortunately you don’t need to do any of this yourself, Sponge has a full-blown scheduler which will do all this for you! When you schedule your event you will receive a promise from the scheduler which your Jetty thread can wait on (effectively waiting on the intrinsic lock of the future itself) and can also use to retrieve the result!
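
One way to realise that, assuming SpongeAPI’s createSyncExecutor, which hands back a standard ExecutorService whose tasks run on the main thread (the surrounding class and method names are illustrative, not from the Web-API source):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.spongepowered.api.Sponge;
import org.spongepowered.api.scheduler.SpongeExecutorService;

public class SyncDispatcher {
    // Executor whose tasks the Sponge scheduler runs on the main server thread.
    // "pluginInstance" stands in for your @Plugin instance.
    private final SpongeExecutorService syncExecutor;

    public SyncDispatcher(Object pluginInstance) {
        this.syncExecutor = Sponge.getScheduler().createSyncExecutor(pluginInstance);
    }

    /** Jetty thread: run the command on the main thread and wait for the result. */
    public int executeCommand(final String command) throws InterruptedException, ExecutionException {
        Future<Integer> result = syncExecutor.submit(() ->
                // Runs on the main thread, so touching game state here is safe
                Sponge.getCommandManager().process(Sponge.getServer().getConsole(), command)
                        .getSuccessCount().orElse(0));
        // Blocks only the Jetty worker thread, never the main thread
        return result.get();
    }
}
```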

It is vital that you fix this core problem with your plugin before writing any further functionality.


I see your point with the chat messages. I wasn’t thinking of this project on a large scale yet, as I just personally use it for a small case, so I have only briefly thought about memory usage. I will probably also have to paginate certain endpoints, because returning information about 1000 entities is too much.

As for the threads - I have worked with threads before, but in a more explicit context (meaning if I wanted a separate thread I had to start it myself; I wasn’t thinking of the fact that Jetty would start its own threads).

That being said I can now see the problem with the chat messages and commands.

About the code you linked with the worlds - I thought I read somewhere in the Sponge docs that it said “assume all objects to be copies”, so I thought using the array of worlds would not be a problem.

If Sponge offers a scheduler I will most certainly be using that.
Thanks for the input

Web servers and in fact any network server will in general have to spin up multiple threads, usually in a certain-sized pool which is effectively the maximum number of requests which can be handled simultaneously. If you have 3 clients accessing the server at once it wouldn’t make sense that each request was handled one after the other - though this kind of “pipelined” approach is certainly valid if responsiveness of the endpoint is not important - so in that case you will have 4 threads: a “listener” thread which is sitting there waiting for new incoming connections and marshalling new requests to one of the pool threads, and three “worker” threads which handle the requests.

If you have configured a web server to accept at most 10 concurrent connections then it’s expected that the “listener” thread will simply block and accept no new connections until one of the worker threads completes and is released.

Essentially it means that you should assume that all requests happen on a unique thread.

This would certainly make sense for most objects, since immutable objects like a player UUID are intrinsically thread-safe; other objects, such as ItemStacks, might be shallow copies; whilst some objects, such as players and worlds, simply can’t be treated as thread-safe. It would simply be astonishingly unperformant to deep-copy objects like worlds, and so it’s generally unsafe to assume that you can perform operations on them outside the “main” thread.

Apart from NodeJS, which only uses one thread; that’s what I have been using a lot lately. That would also explain why I forgot to account for multiple threads.

So do you think I should perform all operations in the main thread? Or use some sort of thread-safe queue (which you mentioned earlier) to access complex objects?

oh god…i dont even.

I’m sorry, I don’t quite understand what you’re trying to say?

Well, even if your Jetty process only had one thread, it’s still a different thread to the game itself, so you still have two threads even if all incoming requests are pumped sequentially; marshalling still needs to occur regardless. Essentially, the fact that this call is nonblocking is already enough of a clue that the webserver is spinning up a new thread.

You have lots of choices. You can use a hybrid model where you use locking to share some resources (the chat log is a good example of this; a collection which is lock-on-write would be ideal), or you can marshal requests but cache responses for a certain time so that (for example) multiple calls to your entity endpoint within 1 second will only result in 2 cross-thread marshalling operations, followed by caching the result in an immutable (and therefore intrinsically thread-safe) struct of some kind. You can choose to spin up a scheduler task for each request, or alternatively have your plugin spin up a single “synchronous” recurring task which acts as your dispatcher, and then marshal to it via ConcurrentLinkedQueue and have the marshallable request be its own Future.

Basically there are lots of choices, but you should settle on an architecture and get it working first, and then fit all your requests inside that arch. If you’re smart, you’ll make the request/response/promise/future object itself be the response so that you can serialise it straight to JSON using Gson.
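
As one illustration of the caching idea, a sketch of a time-based cache wrapper (entirely hypothetical names; it assumes an executor whose tasks run on the main thread, such as one obtained from the Sponge scheduler):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;

// Hypothetical time-based cache: repeated requests within CACHE_MILLIS reuse the
// last snapshot instead of marshalling to the main thread again.
public class CachedEndpoint<T> {
    private static final long CACHE_MILLIS = 1000; // arbitrary staleness window

    private final ExecutorService syncExecutor; // tasks run on the main thread
    private final Callable<T> query;            // the actual main-thread lookup

    private T cached;
    private long cachedAt;

    public CachedEndpoint(ExecutorService syncExecutor, Callable<T> query) {
        this.syncExecutor = syncExecutor;
        this.query = query;
    }

    /** Jetty threads: synchronized so a burst of requests causes only one marshalling. */
    public synchronized T get() throws InterruptedException, ExecutionException {
        long now = System.currentTimeMillis();
        if (cached == null || now - cachedAt > CACHE_MILLIS) {
            cached = syncExecutor.submit(query).get(); // marshal to the main thread and wait
            cachedAt = now;
        }
        return cached;
    }
}
```

The cached value should be an immutable snapshot so Jetty threads can read it without any further locking.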

Hi, did you have time to make some changes?

I have indeed made some changes. I was able to address the problems @mumfrey mentioned (at least partly; the solution I found doesn’t completely satisfy me yet).
Sadly I didn’t have enough time to make a new release yet. That should be done by the end of the week.

I’m still looking at efficient ways to access the server data without having to use the main thread, especially for read operations, which means the plugin might not be as performant as one might hope. I’ll have to run some tests to verify that.


That’s great news, take your time :slight_smile:

Unfortunately you can’t; the only way to achieve a threaded read would be to periodically pump a copy of the world data into a duplicate, thread-safe space. Of course this would potentially double the memory footprint.

A possible approach would be to precache certain expensive lookups (like entities) by having workers on the main thread which refresh every 10 seconds or so, and then have the API endpoint return how “stale” the response is (in ms), but it means you’re running the queries even when no requests are happening, having to return stale data, and potentially consuming a lot of unnecessary CPU cycles just generating information in case it’s requested.

In the first instance you should work on marshalling requests to the main thread and returning the responses as I described above. Look at performance once you have a stable working solution.

If you need any assistance or have any questions with respect to the threading, please don’t hesitate to ask.


There are currently two security features in place.
The first one being that the server usually only runs on localhost, meaning there is no way to attack it without having access to the underlying server.
The second one is simple token-based authentication, although the token is not yet randomly generated.

I’m not sure if that’s what you were talking about…