Plugin Hosting

I think the third proposal would be good, given some caveats. The first iteration of a new plugin should require code review by a moderator or expert to ensure that it’s up to par. After that point, subsequent versions would be passed through an automatic code analyzer to check for issues. We would need a specific format that all plugins must follow (similar to the PR guidelines for Sponge) so that a program can scan for issues, and each build would have to run on a test server without throwing any severe errors. There would also need to be some kind of security check for malicious code. It should be a little zealous (producing some false positives) and mark each hit for more in-depth review. If a person looks over a warning and finds no actual issue, the system can be adjusted to whitelist that pattern to prevent catching it again in the future, and the plugin will be cleared. Those are just some thoughts. Since I’m not a professional Java developer, I’m sure I’m missing some key detail, but something like this would be worth considering for efficiency.
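The scan-then-whitelist flow described here could look something like the sketch below. The pattern list, the whitelist keying, and all class and method names are illustrative assumptions, not an actual Sponge tool:

```java
import java.util.*;

// Hypothetical sketch of the scan-and-whitelist flow described above.
// The suspicious-pattern list and class names are invented for illustration.
public class PluginScanner {

    // Deliberately over-zealous patterns: string constants that *might*
    // indicate malicious behaviour, accepting some false positives.
    private static final List<String> SUSPICIOUS = List.of(
            "Runtime.exec", "ProcessBuilder", "setOp", "http://");

    // (pluginId + pattern) pairs a human reviewer has already cleared,
    // so the same false positive is not raised again on later versions.
    private final Set<String> whitelist = new HashSet<>();

    public void whitelist(String pluginId, String pattern) {
        whitelist.add(pluginId + "|" + pattern);
    }

    /** Returns the suspicious patterns found in the jar's extracted strings
     *  that are not yet whitelisted for this plugin; empty means "cleared". */
    public List<String> scan(String pluginId, List<String> jarStrings) {
        List<String> findings = new ArrayList<>();
        for (String s : jarStrings) {
            for (String pattern : SUSPICIOUS) {
                if (s.contains(pattern) && !whitelist.contains(pluginId + "|" + pattern)) {
                    findings.add(pattern);   // flag for in-depth human review
                }
            }
        }
        return findings;
    }
}
```

Keying the whitelist on (plugin, pattern) pairs would mean a reviewer’s sign-off only silences that specific warning for that specific plugin, rather than globally.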


I tagged @Tux2 so I mean his idea.


I vote for Proposal 3 - and I agree with @Tux2 's idea of a reputation system. I also think the community should be able to flag the files.


You could set up a strawpoll for this. Anyway, I’m definitely a fan of the second proposal. There are a lot of skilled developers out there who surely wouldn’t mind reviewing some files.
Something extra you could add, though: once a file is fully reviewed by a staff member, you could show a little verified or checked symbol, indicating that the file you’re downloading is 100% safe. Something like the Facebook or Twitter verified symbol on a file’s thumbnail? :slight_smile:

EDIT: Something like this:


I like the Verified symbol idea

But a strawpoll isn’t really practical here, because of all the custom ideas.


I like the idea of a hybrid system, something like #3. Consider maybe something like:

  • Unreviewed: the plugin is not considered trusted and should only be used if the downloader agrees to the risk.
  • Community-reviewed: After (x) people have reviewed the code, it can be flagged as community-reviewed. This would have to be watched carefully to prevent abuse by spinning up sockpuppet accounts, perhaps by gating on account age or a rep system.
  • Fully reviewed: While the project can never assert complete safety, a fully-reviewed plugin has been reviewed by staff who agree that it is likely to be safe.
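As a sketch, the three tiers above could be modeled like this; the minimum reviewer count and account-age cutoff are invented numbers standing in for the “(x) people” and the anti-sockpuppet check:

```java
import java.util.List;

// Illustrative model of the three-tier review status described above.
// MIN_COMMUNITY_REVIEWS and MIN_ACCOUNT_AGE_DAYS are assumed values.
public class ReviewStatus {
    public enum State { UNREVIEWED, COMMUNITY_REVIEWED, FULLY_REVIEWED }

    static final int MIN_COMMUNITY_REVIEWS = 5;   // the "(x) people" threshold
    static final int MIN_ACCOUNT_AGE_DAYS = 90;   // sockpuppet guard

    /** accountAges holds the account age (in days) of each approving reviewer. */
    public static State classify(boolean staffApproved, List<Integer> accountAges) {
        if (staffApproved) return State.FULLY_REVIEWED;
        long credible = accountAges.stream()
                .filter(age -> age >= MIN_ACCOUNT_AGE_DAYS)  // ignore fresh accounts
                .count();
        return credible >= MIN_COMMUNITY_REVIEWS
                ? State.COMMUNITY_REVIEWED : State.UNREVIEWED;
    }
}
```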

Another option might be some kind of trust system built into the APIs?


I would personally like to see the third proposal go through. I have no additional comments on it, as others have already stated exactly what I would like.


I prefer #1 but would settle for #3 with the verified symbol and/or rep system. I am torn between burning out the staff and having another InfiniteDispenser incident… a malicious updater used for DDoS attacks…


Suggestion: no need to add a notice for connections to Updater, Metrics, or the Mojang servers… :smiley: Only for things like silent auto-updaters.
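One hedged way to implement that: keep an allow-list of well-known service hosts and only surface a notice for connections to anything else. The host names below are examples, not an official list:

```java
import java.net.URI;
import java.util.Set;

// Sketch of the "no notice needed" idea: connections to well-known service
// hosts pass silently, anything else gets flagged for reviewers.
public class ConnectionNotice {
    // Example hosts only; a real allow-list would be curated by staff.
    private static final Set<String> KNOWN_SAFE_HOSTS = Set.of(
            "api.mojang.com", "sessionserver.mojang.com", "mcstats.org");

    /** True if a connection to this URL should surface a notice,
     *  e.g. a silent auto-updater pulling jars from an unknown host. */
    public static boolean needsNotice(String url) {
        String host = URI.create(url).getHost();
        return host == null || !KNOWN_SAFE_HOSTS.contains(host);
    }
}
```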


I like the third option. I also like the idea of awarding reputation points to developers based on mod stability, creativity, difficulty, downloads, etc.


Possibly, but if we’re doing QA like that, it will be a lot of extra work and burn out staff members.

Maybe a mixture of methods #1 and #3: keep an eye on newer devs and plugin projects that have been online for less than three months or so, but once a plugin has been around for about half a year, switch to an automated scan. Also, there could be some kind of recommendation to include the source files (e.g. checking the “include source” option in Eclipse’s export process).
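The age-based hand-off could be sketched like this, with the six-month cutoff taken from the “half a year” figure above (the mode names and cutoff are assumptions):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Sketch of the age-based hand-off: young projects stay on manual review,
// established ones move to the automated scanner.
public class ReviewMode {
    public enum Mode { MANUAL, AUTOMATED }

    static final long MANUAL_REVIEW_MONTHS = 6;  // "around for like half a year"

    /** Picks a review mode based on how long the project has been online. */
    public static Mode modeFor(LocalDate firstRelease, LocalDate today) {
        long months = ChronoUnit.MONTHS.between(firstRelease, today);
        return months >= MANUAL_REVIEW_MONTHS ? Mode.AUTOMATED : Mode.MANUAL;
    }
}
```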

Maybe there could be a 3-stage system like this (working with colors :smile: ):

  • First, there is the stage where the moderators approve the first version of the plugin to verify that there is no backdoor. The plugin gets a yellow “moderator approved” sign in the plugin list.
  • Next, once subsequent versions pass the automated scan, it gets a green “scanner approved” sign in the list.
  • Finally, if the source is public/included and more than X community members have reviewed the project, the last stage is entered: a blue “community approved” sign in the list.

Maybe the stages should be in a different order, but I like the idea of such a system. It would be the most secure approach and would guarantee that older/known plugins end up in a very trustworthy state, while newer plugins make their way there over time.


If you’re going to work with some sort of scan system, you could put a delay on the scan. That way people won’t be able to test various methods of circumventing the scan in a short time frame by trial and error.
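A minimal sketch of that delay, assuming the scan verdict is simply held back for a base period plus random jitter so repeated probing gives no quick feedback (the bounds are made up):

```java
import java.util.Random;

// Sketch of a delayed, jittered scan verdict: an attacker who re-uploads
// variants gets no fast feedback loop to probe the scanner with.
public class ScanScheduler {
    static final long MIN_DELAY_MINUTES = 60;   // illustrative base hold-off
    static final long MAX_EXTRA_MINUTES = 180;  // illustrative jitter range

    /** Minutes after upload at which the scan verdict becomes visible. */
    public static long verdictDelayMinutes(Random rng) {
        return MIN_DELAY_MINUTES + rng.nextInt((int) MAX_EXTRA_MINUTES + 1);
    }
}
```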


I like 3 the best. The cons you mentioned are well worth the expedited review process - I don’t want the sponge staff to be spending a large part of their time just playing traffic cop. It’s a job nobody wants to do, and it will negatively impact both their productivity and their morale. Option 3 is a good balance.


One of the concerns I had with this when I was considering it for DBO was that moderators would rely heavily on the flagging and the quality of the manual checks would suffer. This has caused pretty major issues in the past.

Thoughts on allowing any external links, but with an interstitial page/modal that tells users “Any file you download from site X has not been checked. Do you wish to continue?”?

Yeah, this is a really big issue and I’m not sure there’s a lot that can be done about it given the presumably voluntary nature of the job.

As I’m sure you are aware, we’ve had a number of cases on DBO where established authors have suddenly released malicious builds. I’d be concerned about this.

Flags can always be evaded and aren’t going to cover every possibility. Reflection and other techniques can fool the checks. I spent a lot of time developing tools for DBO, and it ended up in a situation where we had a lot of false positives, since there were so many ways to do malicious things.


There are a lot of ways to get around scanning, so it would only catch the most simple attempts at malicious code.


Would it be possible to have the mod author(s) do some of the certification themselves as part of the approval process? Let them run the tools and provide the reports as part of the submission. Also, would it be beneficial to prioritize the incoming submission queue based on factors such as existing author reputation, availability of source, whether self-certification reports are provided, etc.?
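The prioritized queue could be sketched as below; the scoring weights and the `Submission` fields are invented for illustration:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of a prioritized review queue: submissions with reputable authors,
// available source, or self-certification reports get reviewed sooner.
public class ReviewQueue {
    public record Submission(String name, int authorReputation,
                             boolean sourceAvailable, boolean selfCertified) {
        int score() {  // higher score = reviewed sooner; weights are made up
            return authorReputation
                    + (sourceAvailable ? 50 : 0)
                    + (selfCertified ? 25 : 0);
        }
    }

    private final PriorityQueue<Submission> queue =
            new PriorityQueue<>(Comparator.comparingInt(Submission::score).reversed());

    public void submit(Submission s) { queue.add(s); }
    public Submission next() { return queue.poll(); }
}
```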

Personally, I’m for either 1 or 2, but with a modification. Instead of relying on decompilation, force every build to be produced by CI and linked to a commit in a publicly available repository. 3 sounds good at first, but is really useless; having submitted a legitimate plugin is no proof that there isn’t some malicious intent.

Assuming the developer owns/has access to their own CI, what would stop them from replacing the JAR artifact with a malicious one so that users inadvertently downloaded malicious code?

Of course, that would mean Sponge standardizing on a single (or an approved list of) CI providers. For example, Travis shows a link to the applicable commit for each build, so the config could be easily inspected.
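One way the swapped-JAR concern could be addressed, assuming the CI provider exposes a checksum for each build’s artifact (the class and method names here are hypothetical): the hosting site recomputes the uploaded JAR’s SHA-256 and rejects it if it differs from what the CI reported for the linked commit.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: verify an uploaded JAR is byte-identical to the CI-built artifact
// by comparing SHA-256 checksums.
public class ArtifactVerifier {

    public static String sha256Hex(byte[] jarBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(jarBytes);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // SHA-256 is always available
        }
    }

    /** True only if the upload matches the checksum the CI reported. */
    public static boolean matchesCiBuild(byte[] uploadedJar, String ciReportedSha256) {
        return sha256Hex(uploadedJar).equalsIgnoreCase(ciReportedSha256);
    }
}
```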