
Comments by User

Tuesday, October 10, 2023 - 12:32

They updated their terms on April 4, 2023 (https://pixabay.com/service/terms/):

3. CC0 License

Some of the Content made available for download on the Service is subject to and licensed under the Creative Commons Zero (CC0) license ("CC0 Content"). CC0 Content on the Service is any content which lists a "Published date" prior to January 9, 2019. This means that to the greatest extent permitted by applicable law, the authors of that work have dedicated the work to the public domain by waiving all of his or her rights to the CC0 Content worldwide under copyright law, including all related and neighboring rights. Subject to the CC0 License Terms, the CC0 Content can be used for all personal and commercial purposes without attributing the author/ content owner of the CC0 Content or Pixabay.

---

Luckily there are strong precedents against companies retroactively enforcing license agreement changes.

CC0 gives you the freedom to distribute the work under more restrictive terms.

"Please be aware that while all Images and Videos on Pixabay are free to use for commercial and non-commercial purposes, depicted items in the Images or Videos, such as identifiable people, logos, brands, etc. may be subject to additional copyrights, property rights, privacy rights, trademarks etc. and may require the consent of a third party or the license of these rights - particularly for commercial applications."

That is generally the case; maybe not the "additional copyrights", but the rest definitely. Creative Commons is based on copyright and only gives you freedoms regarding copyright.

If you use something commercially, you also have to follow commercial property protections (patents, trademarks, …).

The right to one's own image is a personality right derived from human dignity. If you allow somebody to photograph you, that permission is always given in a certain context. You can put images into a different context via captions and modifications (especially porn captions, fake nudes and deepfakes). In the worst case that can be defamation, which is a crime in some countries.

Nor can you do anything else that constitutes a crime.

 

Monday, March 29, 2021 - 02:20

"Tannenbaum" works, but "Baum" doesn't. Maybe they tried to find parts of the compound words.

Monday, March 29, 2021 - 01:31

I basically always Ctrl-A Ctrl-C before I reply.

It always occurs when I use triple dots, proper quotes or a proper apostrophe. (edit: each single one of them breaks it)

https://n0paste.tk/WmI04lg/

Monday, March 1, 2021 - 12:49

Hehe, I fixed the character order.
But be warned, the alignment is also off.

Saturday, January 30, 2021 - 20:12

I did not notice at first that this JSON format encodes a color mapping and not palettes.

switchpalette.sh actually takes two palettes and maps every color in the first one to the color with the same index in the second one; that works with RGBA. I wrote loadgpl because I worked with a lot of subpalettes that shared colors at the time. The implementation of switchpalette.sh also has problems: it maps one color after another, which leads to errors when the source and target palettes share colors. It also leads to errors, which you sometimes do not immediately notice, when the source image has some wrong color values. That's one of the main reasons why I hated having to do such a translation in JavaScript, but I think I replaced all colors at the same time there.
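
To illustrate why replacing all colors at the same time avoids the problem, here is a minimal sketch with Pillow (file names and palette values are made up): the whole lookup table is applied in one pass over the pixels, so a color that appears in both the source and the target palette cannot be remapped twice.

    from PIL import Image

    def remap_colors(image_path, source_palette, target_palette, out_path):
        # one lookup table, applied in a single pass over the pixels
        mapping = dict(zip(source_palette, target_palette))
        img = Image.open(image_path).convert("RGBA")
        img.putdata([mapping.get(pixel, pixel) for pixel in img.getdata()])
        img.save(out_path)

    remap_colors(
        "hair.png",
        [(64, 32, 16, 255), (128, 64, 32, 255)],   # source subpalette
        [(128, 64, 32, 255), (192, 96, 48, 255)],  # target subpalette shares a color with the source
        "hair_light.png",
    )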

GitHub has nice tools for visual diffs between binary images. I'm happy to consider other alternatives if you have ideas, but git/GitHub still seems like the most obvious solution for the time being...

gitg and similar tools also have integrated image diffs.

I don't think size itself will be much of an issue with 64x64/32x32 pixelart.

Edit regarding size:

Indexed PNGs can be smaller.
Using "Bascinet, Raised, Plumage.png" as an example:

Original size: 54.2KiB (8bit RGBA)

Just running optipng: 12.3KiB (4bit indexed)

Filling all transparent colors with black in GIMP and then making black transparent, then saving with maximum compression and minimal additional data: 31.6KiB (8bit RGBA)

Just saving it again with mtPaint makes it grow to 32.4KiB (8bit RGBA); that's the difference in PNG compression that different tools achieve at the highest compression level (9).

Running optipng on this: 10.5KiB (4bit indexed)

Just making the gimp image into 8bit paletted with mtPaint: 11.5KiB (8bit indexed)

mtPaint does not support <8bit. You can see that cleaning up transparent colors can also make quite a difference. The first run of optipng actually created an indexed image with an alpha channel; there is no index marked as transparent. I'm not sure whether that's a non-standard thing to do or not.
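
To check what a given indexed PNG actually contains, something like this works with Pillow (the file name is just an example):

    from PIL import Image

    im = Image.open("Bascinet, Raised, Plumage_opti.png")   # example file name
    if im.mode == "P":
        trns = im.info.get("transparency")
        if trns is None:
            print("no transparency information")
        elif isinstance(trns, int):
            print("single fully transparent index:", trns)
        else:
            # raw tRNS data: one alpha value per palette entry (the "alpha channel" case)
            print("per-index alpha values:", list(trns))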

Color values make a difference because of the RLE that is part of the DEFLATE algorithm, which PNG in turn uses for compression.

So that's another thing we should or could take into account: all transparent areas should have the same color. The original LPC spritesheets already get that wrong. If you use a paletted PNG with just one transparent index, you are guaranteed not to make that mistake. Disabling alpha in the editor (which is different from removing it from the image) should have the same effect, since it lets you see what colors the transparent pixels have.

edit2:

The automated way with ImageMagick + pngcrush + optipng would be:

file=Bascinet,\ Raised,\ Plumage.png
magick "${file}" -background black -flatten -transparent black "${file}_im.png"
pngcrush -rem alla -rem text "${file}_im.png" "${file}_cru.png"
optipng "${file}_cru.png" -out "${file}_opti.png"

Optipng only saves 10 bytes in this example, since pngcrush already compressed it well.

Saturday, January 30, 2021 - 12:44

Are the grayscale images grayscale with alpha?

Is pyxeledit capable of opening indexed PNG? Does it only restrict saving to RGBA?

Sorry, what I was thinking about was taking one index/shape image and making say 10 different recolors with 10 different palettes.

You can just call it multiple times for that. There is a bit more overhead, because the image has to be read multiple times, but that's about it.

I'd prefer to focus on subpalettes.

I haven't figured out how we should handle objects with multiple "materials" (e.g. several independent color palettes). For instance, the base bodies and their eyes, or the helmets that ElizaWy just posted and their red plumes. I suppose an advantage of the JSON palettes here is that different materials could have different standard palettes, which could be concatenated unambiguously.

You can basically do that with .gpl too: it's one color per row, and the readers just ignore any lines which aren't RGB values. I don't know if that's standard-conformant, and I have to test whether Pillow is capable of reading that. But you can also properly concatenate them with something like cp whatever.gpl whatever_new.gpl; tail whatever2.gpl -n+5 >> whatever_new.gpl or you can filter the header lines out with something like sed "/^\(#\|GIMP\|Name\|Columns\)/d" whatever2.gpl >> whatever_new.gpl
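
For reference, a reader that keeps only the RGB rows is tiny (a sketch, not standard-conformant parsing; the file names are examples):

    def read_gpl(path):
        # keep only lines whose first three fields are numbers, ignore headers/comments
        colors = []
        with open(path) as gpl:
            for line in gpl:
                fields = line.split()
                if len(fields) >= 3 and all(f.isdigit() for f in fields[:3]):
                    colors.append(tuple(int(f) for f in fields[:3]))
        return colors

    # concatenating palettes is then just list concatenation
    combined = read_gpl("whatever.gpl") + read_gpl("whatever2.gpl")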

I'd like to allow feeding multiple palettes into my tool, as well as an offset if you just want to change one subpalette. It should keep the other colors in the palette untouched and also keep the transparent index.

I always place subpalettes one after another and sort them darkest to lightest color.
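
What I have in mind for the offset option, as a sketch (the palette values are made up):

    def apply_subpalette(palette, subpalette, offset):
        # replace len(subpalette) entries starting at offset, leave the rest
        # (including the transparent index) untouched
        new_palette = list(palette)
        new_palette[offset:offset + len(subpalette)] = subpalette
        return new_palette

    full_palette = [(255, 0, 255), (60, 40, 30), (120, 80, 60), (180, 120, 90), (20, 20, 20)]
    hair_blue = [(30, 30, 90), (60, 60, 150), (90, 90, 210)]
    print(apply_subpalette(full_palette, hair_blue, 1))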

I should try to switch to Pillow; it's more likely to be already installed, and it should be doable. It does come with its own .gpl reader, but my code size will very likely grow.

The next issue would be a duplimap written with Pillow, since I really hate what ImageMagick does to the palette. Currently I have to do the palette swapping before I feed things into ImageMagick.

Git LFS has the same centralization issues as SVN: the images are all stored in one place. I don't know how strongly they are tied to the hosting domain, but that could make it hard to preserve or fork.

Saturday, January 30, 2021 - 06:58

If you have an image, a grayscale version and a palette ... you can just as well have an image and a paletted version, which only lacks the alpha data. You can also use RGBA images and palettes, like I do in JavaScript, but that will give you trouble with shared colors, which you can indeed fix manually in your approach. But I imagine it's harder to fix manually, since you do not directly see which colors you assign in the grayscale image. Your additional data can also be lost more easily than an embedded palette when somebody makes a remix without using the tools.

I think I chose PyPNG because all methods of PIL.ImagePalette.ImagePalette are marked as experimental and badly documented.

Saturday, January 30, 2021 - 04:06

It could be interesting to minimize the differences between males and females, but that could break compatibility.

big Makefile (or similar)

I would advise against make. I can't say anything about other build systems, since I'm not familiar with them.
The best thing about make is that it only rebuilds what changed.

I started with a Makefile in modular bodies.
There were two things I perceived as big downsides:

  1. You have to know what you can build; there is no way to list all options, since make chooses the rules based on the path you give it.
  2. It only allows the placeholder % once, which means that if you allow custom item names, you have to manually add a rule for each animation, each recolor, and actually each possible combination of both.

That's why I started to switch to a simple build script for my portraits remix.

I used --usage to tell people the naming scheme it expects.

--list semi-dynamically lists all possibilities by directly getting the file names from the folders.

I reused parts of --list for a --random parameter that just picks a random possible combination. I used that for generating previews, but it becomes more relevant when there are a lot of variations to select from.
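
The gist of --list and --random, as a sketch (the folder layout and names are made up, not my actual script):

    import itertools
    import os
    import random

    def options(folder):
        # the available choices are simply the file names in the asset folder
        return [os.path.splitext(name)[0] for name in sorted(os.listdir(folder))]

    bodies = options("bodies")   # made-up folder names
    hair = options("hair")

    # --list: print every possible combination
    for body, hairstyle in itertools.product(bodies, hair):
        print(body, hairstyle)

    # --random: pick one possible combination, e.g. for generating previews
    print(random.choice(bodies), random.choice(hair))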

- coerce: takes an RGBA image and a palette (in .gpl, JSON, or PNG format), and produces an indexed PNG image which only uses colors from the palette (optionally, could force "nearby" but non-matching colors to use those from the palette)

I did not start to implement such a thing, but I have thought about it a bit. I think you'd want it to warn you if there are colors in the image which are not part of the palette. Ideally it should also be possible to produce an image showing all the colors that are off.
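
One way such a warning could look, sketched with Pillow (the file names and palette are placeholders): collect the off-palette colors and optionally write a mask image marking the offending pixels.

    from PIL import Image

    def check_against_palette(image_path, palette, mask_path=None):
        img = Image.open(image_path).convert("RGBA")
        allowed = set(palette)                       # palette as a set of (r, g, b) tuples
        offenders = set()
        mask = Image.new("RGBA", img.size, (0, 0, 0, 0))
        for i, px in enumerate(img.getdata()):
            if px[3] != 0 and px[:3] not in allowed:
                offenders.add(px[:3])
                mask.putpixel((i % img.width, i // img.width), (255, 0, 0, 255))
        if offenders:
            print("colors not in palette:", sorted(offenders))
            if mask_path:
                mask.save(mask_path)
        return offenders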

There is one big downside of indexed PNGs: they do not allow semi-transparency. So shadows either have to be in a different image, or you need a script that can convert those back to RGBA PNGs and make all #322125 pixels 60% opaque.

The shadows shouldn't be of interest for the spritesheets, since I don't think they are used there. But it does matter if you try to use this with tilesets.
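
The mentioned conversion script could look roughly like this (Pillow; the file names are examples, the shadow color #322125 and the 60% are from above, 60% of 255 ≈ 153):

    from PIL import Image

    SHADOW = (0x32, 0x21, 0x25)

    img = Image.open("tileset_indexed.png").convert("RGBA")
    img.putdata([
        (r, g, b, 153) if (r, g, b) == SHADOW else (r, g, b, a)
        for (r, g, b, a) in img.getdata()
    ])
    img.save("tileset_rgba.png")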

- recolor: takes an indexed image and applies a palette (or set of palettes)

My tool currently does not support multiple palettes, but it should work if the palettes get concatenated with cat beforehand. It also currently discards the information about which index is transparent.
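
Keeping that information with Pillow would look roughly like this sketch (the palette is assumed to be a list of (r, g, b) tuples, possibly several concatenated subpalettes):

    from PIL import Image

    def recolor_indexed(image_path, palette, out_path):
        img = Image.open(image_path)
        assert img.mode == "P", "expects an indexed PNG"
        # keep the pixel indices, swap the palette
        img.putpalette([channel for color in palette for channel in color])
        kwargs = {}
        transparency = img.info.get("transparency")
        if transparency is not None:
            kwargs["transparency"] = transparency   # keep the transparent index instead of dropping it
        img.save(out_path, **kwargs)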

- collapse_recolors: takes a set of images (that are recolors of one another) and produces a list of palettes, such that each recolor can be generated from the base image with `recolor`

Extracting the color index from an image and dumping it into a .gpl shouldn't be much work. Doing this with RGBA images or images with differently ordered indices would be a lot harder, up to near impossible. Especially images that effectively use multiple palettes and share colors between them will be a problem, since you don't know which palette a shared color should map to.

The base assets are actually an example of that, since they have both skin and eye colors. I did not split those in my submissions yet.
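
Dumping an indexed image's palette into a .gpl with Pillow is indeed short, roughly like this (file name and palette header values are made up):

    from PIL import Image

    img = Image.open("body_base.png")              # made-up file name
    assert img.mode == "P", "expects an indexed PNG"
    flat = img.getpalette()                        # flat [r, g, b, r, g, b, ...]
    colors = list(zip(flat[0::3], flat[1::3], flat[2::3]))

    with open("body_base.gpl", "w") as gpl:
        gpl.write("GIMP Palette\nName: body_base\nColumns: 8\n#\n")
        for index, (r, g, b) in enumerate(colors):
            gpl.write(f"{r:3d} {g:3d} {b:3d}\tIndex {index}\n")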

- collapse: opposite of `distribute`: takes a full animation and collapses it into a minimal number of images, as well as a map of offsets.

Yep, it might be handy to detect duplicates, mirrors and shifted versions of frames. Not a hard problem, since you will mostly just trim transparent pixels and maintain a dictionary, but not entirely trivial either.
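
The trim-and-dictionary idea, sketched with Pillow (the frame file names are made up); shifted copies collapse to the same key because of the trimming, and mirrors are caught by also hashing the horizontally flipped crop:

    from PIL import Image, ImageOps

    def frame_key(frame):
        # bbox of the alpha channel = the non-transparent area; assumes the
        # transparent pixels all share the same color values
        bbox = frame.split()[-1].getbbox()
        return frame.crop(bbox).tobytes() if bbox else b""

    seen = {}
    for path in ["walk_0.png", "walk_1.png", "walk_2.png"]:
        frame = Image.open(path).convert("RGBA")
        key = frame_key(frame)
        mirrored = frame_key(ImageOps.mirror(frame))
        if key in seen:
            print(path, "duplicates", seen[key])
        elif mirrored in seen:
            print(path, "is a mirror of", seen[mirrored])
        else:
            seen[key] = path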

- offset: offsets each frame in an animation by specified amount(s) (for fixing bugs)

You could just autogenerate a mapping for that.
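
If the offsets are known, generating such a mapping is a one-liner plus a dump (the JSON layout here is made up, not an existing format of my tools):

    import json

    # one (dx, dy) pair per frame of the affected animation (made-up values)
    offsets = {"thrust": {str(frame): [0, -1] for frame in range(8)}}

    with open("offsets.json", "w") as out:
        json.dump(offsets, out, indent=2)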

Friday, January 29, 2021 - 19:44

As far as I know, no assets have been released for TheraHedwig's run or jump yet

Only the original ones + wulax's have a bigger pool of assets

I HEAVILY advise you to preview animations

Yes, that's in general a good idea, I guess. I haven't really looked at all the available ones yet.

Or even clear what it was supposed to be. In the end, I've found it would have been better looking and easier to draw things from scratch.

Most people don't seem to realize that the 'walk' rows are actually a standing frame and then an eight-frame walk, and what you get is a nine-frame animation where the legs stutter and splay out every cycle.

That's true, but all original animations start with a standing frame. Walking isn't the only one. Thrusting, bow shooting and jumping start with a modified version of it.

Are there any substantive differences other than the breast sheet between "male" and "female" bases?

The jaw lines of the heads are definitely different, and so are the brow-line shadows. The ear angle is different.
Male shoulders are broader, which results in completely different arm angles.
Male legs are farther apart, which again results in completely different angles.
The female torso has a more accentuated waist than the male one, combined with thicker-looking thighs. To make the thighs look thicker, the lower parts of the legs are thinner. Males appear to be positioned one pixel higher, probably to make them look taller.

The only thing that is the same is the upper half of the head.

This is a fine idea. Ideally we can make tools to do this

General image manipulation tools like ImageMagick can easily do that already. But you can also do it with mappings; for splitting you just have two maps and one source image, though that's a bit of a misappropriation.

off-site version control system

It's a bit tricky to decide what to use, since git doesn't work well here and SVN is very centralized, which can be very impractical if the repo gets deserted and dies. Images are binary data, which works badly with diff-based tools like git. But if git or something similar is used, it would be a lot more handy for version control to have one image per frame: it's easier to see which frames were changed in a commit, it's easier to merge commits, and image diffs are harder to read in bigger images. This is a lot harder to work with drawing-wise, though.

a head / hair accessory placement script would be an absolute godsend

My scripts should be able to do that; they come with head position mappings for the original and wulax's animations. The accessory would just have to use the same layout as my heads, which covers all 15 unique heads: the four directions + the special angle for "hurt" + the closed eyes from casting. I positioned the male and female heads so that the upper half of the head is at exactly the same position, so hats should work for both. But it does not include anything to handle hands/arms which overlap the head.
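
The placement itself boils down to compositing at per-frame anchors, roughly like this sketch (the anchor values, frame size and file names are made up, not the mappings shipped with my scripts):

    from PIL import Image

    FRAME = 64
    # (column, row) of the frame in the sheet -> (x, y) of the head anchor inside that frame
    head_anchors = {(0, 8): (22, 4), (1, 8): (22, 5), (2, 8): (22, 4)}

    body = Image.open("body_male_walk.png").convert("RGBA")   # made-up file names
    hat = Image.open("hat_straw.png").convert("RGBA")

    for (col, row), (x, y) in head_anchors.items():
        body.alpha_composite(hat, (col * FRAME + x, row * FRAME + y))

    body.save("body_male_walk_hat.png")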

a standard single-file assembly order as well

I think so, too. The order matters; we should place the most important animations first. If somebody starts to draw and loses motivation/interest, only the first animations will be covered. The die/hurt animation is usually packed at the end, because it only covers one direction, but I would put it at the top to increase compatibility.
I would give higher priority to animations which are needed for NPCs, like idle, walking, running and maybe jumping, and lower priority to those which are era/theme dependent, like bow shooting, gun shooting and spell casting. Slash and thrust should work in any era ... clubs, swords, spears, iron pipes, laser swords and practically any longer blunt object.
