There's been a lot of talk recently about improving the LPC assets, but we haven't had a dedicated thread for it. Most of the talk has been around the characters, but I'm sure some of us have ideas about improving the tilesets and other assets as well. Feel free to discuss either here.
To get the discussion started, some things I've seen brought up are:
I've personally been working on extending the Muscular, Child, and Pregnant bases to have a full set of animations.
Personally, I chose not to use ImageMagick but Python with Pillow and numpy for my scripts. It is maybe a bit slower (it can take 1 or 2 seconds to process the full spritesheet for certain operations), but I can do whatever I want.
As Baŝto noted, indexed PNG has some drawbacks, and I find it simpler to just use a one-channel grayscale image to store the indices and a separate RGBA PNG file with one row to store the palette. The main drawback is that you have no "standard" palette that goes with the image: you need to apply a palette to edit the image with sensible colors and then extract the new indices afterwards. It is a bit cumbersome.
You can have a look at what it looks like in this repo: the "indices" folder contains the grayscale images, the "palettes" folder contains the one-row RGBA PNGs, "body_parts.json" links each grayscale image with its palettes (and allows applying several palettes to one grayscale image), and "generate_colored_images.py" is the Python script that generates all the images in the "images" folder.
I have three scripts to manage indices and palettes:
- get_palette: it takes an RGBA image as input and outputs the palette (with an option to merge similar colors) and the indices.
- get_indices: it takes an RGBA image and a palette as inputs and outputs the indices (and an error if a color is not present in the given palette).
- apply_palette: it takes a grayscale image and a palette as inputs and outputs an RGBA image.
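A minimal sketch of what apply_palette could look like with Pillow and numpy (not the actual script; the file names are just examples):

import numpy as np
from PIL import Image

def apply_palette(indices_path, palette_path, out_path):
    # Each pixel of the grayscale image is an index into the palette row.
    indices = np.array(Image.open(indices_path).convert("L"))
    # The palette is a 1-row RGBA PNG; column i holds the color for index i.
    palette = np.array(Image.open(palette_path).convert("RGBA"))[0]
    # Vectorized lookup: every index becomes its RGBA color.
    rgba = palette[indices]
    Image.fromarray(rgba, mode="RGBA").save(out_path)

# e.g. apply_palette("indices/body.png", "palettes/light.png", "images/body_light.png")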
I am using the Git LFS extension to manage the PNGs (and the other binary files) of my repo and it works well so far. It is supported on GitLab and GitHub (I have never tried it on this platform).
I also have scripts to transform a male head into a female head (basically just a list of offsets), and a script to detect and fix inconsistencies between the west- and east-facing animations.
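For what it's worth, the offsets idea could look something like this (the bounding box and offsets here are invented for illustration; the real scripts presumably use measured values per frame):

from PIL import Image

# Invented numbers: bounding box of the head inside a 64x64 frame, and a
# per-direction offset to move it by.
HEAD_BOX = (16, 0, 48, 32)
HEAD_OFFSET = {"up": (0, -1), "left": (1, 0), "down": (0, 1), "right": (-1, 0)}

def shift_head(frame, direction):
    # frame is a single 64x64 RGBA animation cell.
    head = frame.crop(HEAD_BOX)
    out = frame.copy()
    out.paste((0, 0, 0, 0), HEAD_BOX)  # erase the head at its old position
    dx, dy = HEAD_OFFSET[direction]
    out.paste(head, (HEAD_BOX[0] + dx, HEAD_BOX[1] + dy), head)  # paste shifted, alpha as mask
    return out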
If you have a grayscale image and a palette ... you can just as well have an indexed (paletted) version of the image, which merely lacks the alpha data. You can also use RGBA images and palettes, like I do in JavaScript, but that will give you trouble with shared colors, which you can indeed fix manually in your approach. But I imagine it's harder to fix manually, since you do not directly see the colors you assign in the grayscale image. And your additional data can be lost more easily than an embedded palette when somebody makes a remix without using the tools.
I think I chose PyPNG because all the methods of PIL.ImagePalette.ImagePalette are marked as experimental and badly documented.
I just have grayscale images and palettes. RGBA images are only generated to preview or make edits with true colors (and then I get back a grayscale image).
This way I am sure that I don't have wrong/obsolete recolored images that live somewhere. There is only one truth for the indices.
I don't claim this is ideal, but I wanted to share the workflow I am using. And I think separating the shapes (the indices) from the colors (the palettes) makes sense, philosophically at least.
Good points all. I appreciate the comments about indexed palettes and alpha transparency. Another disadvantage of indexed PNGs is that they are not supported by all editing software (for instance PyxelEdit, which I otherwise mostly like).
pvigier, could you share your get_palette, get_indices, and apply_palette scripts? I don't see them in your repo, but they would be helpful.
Perhaps I'm misunderstanding, but I think that can be solved by having a "source" directory where the shape/index images and palettes live, and an "output" (or "build" or "bin" or whatever) directory where the generated images live. Images in "output" never get directly edited.
I'll have to mess around with ImageMagick, Pillow/numpy, and PyPNG and decide what works best. It may be a combination of all three, depending on the specific task. If I end up using any of the original Ruby scripts, I'll probably port them to Python so there are not too many languages at play. Slow is not the end of the world, but it depends on how slow and how often the images need to be "re-built". If an artist can edit something that looks somewhat like the final product and then only infrequently re-build the derived/output images, it's fine if the scripts are kind of slow. But if you have to run the scripts every time you save, speed will matter more.
Yes, that is exactly the advantage. I imagine a set of shell tools that implement the verbs I described above. Then a Makefile (or similar; I'm not married to Make specifically), combined with a standard set of index/shape input images, calls the shell tools to generate all the output images (producing a collection of layer-able spritesheets, similar to those in castelonia's current generator). If someone wants to do something different, they can call the shell tools directly, or create a fork with a different Makefile.
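As a very rough sketch of that source/output split (Python rather than Make, and all paths are hypothetical), the rebuild step could amount to:

from pathlib import Path

SOURCE = Path("source")   # hand-edited index/shape images and palettes
OUTPUT = Path("output")   # generated spritesheets; never edited directly

def rebuild():
    OUTPUT.mkdir(exist_ok=True)
    for shape in (SOURCE / "indices").glob("*.png"):
        for palette in (SOURCE / "palettes").glob("*.png"):
            out = OUTPUT / f"{shape.stem}_{palette.stem}.png"
            # apply_palette as sketched earlier in the thread
            apply_palette(shape, palette, out)

if __name__ == "__main__":
    rebuild()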
The only thing I don't like about pvigier's approach is that the shapes/indices (https://gitlab.com/vagabondgame/lpc-characters/-/tree/master/indices) cannot be edited directly in a practical way (one would have to convert them to RGBA, then convert the edited RGBA back to indexed; not impossible, but kind of a pain). By contrast, the hair shapes in https://github.com/jrconway3/Universal-LPC-spritesheet/tree/master/_buil... are RGBA images, but they use a standardized palette so they are directly viewable/editable. That standardized palette can differ for different types of images (so for instance, I could use a different palette for the shields than for hair). I think I like this solution better.
Sorry, what I was thinking about was taking one index/shape image and making say 10 different recolors with 10 different palettes.
I haven't figured out how we should handle objects with multiple "materials" (e.g. several independent color palettes). For instance, the base bodies and their eyes, or the helmets that ElizaWy just posted and their red plumes. I suppose an advantage of the JSON palettes here is that different materials could have different standard palettes, which could be concatenated unambiguously.
Here they are: https://github.com/pvigier/lpc-scripts.
I don't like that either! :p I think the main reason I stayed with this approach is that my game consumes the grayscale images and palettes directly.
I think this is what makes the most sense from an artist's point of view: to have an RGBA image that defines the indices and the canonical palette, and that can be edited directly. To compute the canonical palette, we could just collect the colors in the image and sort them in lexicographic order.
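A minimal sketch of that canonical-palette idea with Pillow and numpy (my own guess at an implementation, not an existing script):

import numpy as np
from PIL import Image

def canonical_palette(path):
    # Collect the unique non-transparent colors, sorted lexicographically (R, then G, B, A).
    pixels = np.array(Image.open(path).convert("RGBA")).reshape(-1, 4)
    opaque = pixels[pixels[:, 3] > 0]
    return np.unique(opaque, axis=0)  # np.unique sorts the rows lexicographically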
However, I see some issues:
* if we want to change a color in the canonical palette, we may change the order of the colors, and then all the associated palettes are invalidated and need to be updated.
* we can't share the palettes between images if they don't use the exact same set of colors.
But I am afraid there is no silver bullet.
I tried this too on the leather cap (to recolor the cap and the feather). My approach was to have a main palette that defines all the colors in the image, and several subpalettes to replace a subset of colors of the main palette. I had a JSON file to store metadata (e.g. the colors in "blue_feather.png" will replace the colors 2, 3, 5 in "leather_cap.png").
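Read literally, that metadata could look something like this (the JSON layout and the recolor function are just a guess at the described scheme):

import json
import numpy as np
from PIL import Image

# Metadata in the spirit of the post: which entries of the main palette
# get replaced by the colors of each subpalette image.
metadata = json.loads('{"leather_cap.png": {"blue_feather.png": [2, 3, 5]}}')

def replace_subpalette(main_palette_path, subpalette_path, slots):
    main = np.array(Image.open(main_palette_path).convert("RGBA"))[0].copy()
    sub = np.array(Image.open(subpalette_path).convert("RGBA"))[0]
    for slot, color in zip(slots, sub):
        main[slot] = color  # overwrite only the listed entries
    return main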
Are the grayscale images grayscale with alpha?
Is PyxelEdit capable of opening indexed PNGs, or does it only restrict saving to RGBA?
You can just call it multiple times for that. There is a bit more overhead, because the image has to be read multiple times, but that's about it.
I'd prefer to focus on subpalettes.
You can basically do that with gpl too: it's one color per row, and readers just ignore any lines which aren't RGB values. I don't know if that's standard-conformant, and I have to test whether Pillow is capable of reading that. But you can also properly concatenate them with something like cp whatever.gpl whatever_new.gpl; tail whatever2.gpl -n+5 >> whatever_new.gpl or you can filter the header lines out with something like sed "/^\(#\|GIMP\|Name\|Columns\)/d" whatever2.gpl >> whatever_new.gpl
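The same filtering can also be done in a few lines of Python, if it ever needs to move out of the shell (a throwaway parser, not Pillow's built-in reader):

def read_gpl(path):
    # Keep only lines that start with three integers (R G B [name]); headers and
    # comments ("GIMP Palette", "Name:", "Columns:", "# ...") are skipped.
    colors = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and all(p.isdigit() for p in parts[:3]):
                colors.append(tuple(int(p) for p in parts[:3]))
    return colors

def concat_gpl(paths, out_path, name="concatenated"):
    # Write one header, then append the colors of every input palette in order.
    with open(out_path, "w") as f:
        f.write(f"GIMP Palette\nName: {name}\nColumns: 16\n#\n")
        for path in paths:
            for r, g, b in read_gpl(path):
                f.write(f"{r:3d} {g:3d} {b:3d}\tuntitled\n")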
I'd like to allow feeding multiple palettes into my tool, as well as an offset if you just want to change one subpalette. It should keep the other colors in the palette untouched and also keep the transparent index.
I always place subpalettes one after another and sort them from darkest to lightest color.
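Assuming the tool works on indexed PNGs, the offset behaviour could be sketched like this with Pillow (the file names and colors are made up):

from PIL import Image

def apply_subpalette(path, subpalette, offset, out_path):
    # Overwrite a slice of the palette of an indexed PNG, starting at `offset`,
    # leaving every other entry (and the transparent index) untouched.
    img = Image.open(path)
    assert img.mode == "P", "expects a paletted PNG"
    palette = img.getpalette()  # flat list [r0, g0, b0, r1, g1, b1, ...]
    for i, (r, g, b) in enumerate(subpalette):
        slot = offset + i
        palette[slot * 3:slot * 3 + 3] = [r, g, b]
    img.putpalette(palette)
    img.save(out_path)  # PNG saving keeps the transparency info from img.info

# e.g. a made-up 3-color ramp (darkest to lightest) written into slots 4..6:
apply_subpalette("helmet.png", [(40, 30, 20), (90, 70, 50), (150, 120, 90)], 4, "helmet_dark.png")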
I should try to switch to Pillow; it's more likely to be already installed, and the switch should be doable. It does come with its own .gpl reader, but my code size will very likely grow.
The next issue would be a duplimap written with Pillow, since I really hate what ImageMagick does to the palette. Currently I have to do the palette swapping before I feed things into ImageMagick.
Git LFS has the same centralization issues as SVN: the images are all stored in one place. I don't know how strongly they are tied to a domain, but that could make the repository hard to preserve or fork.
I guess the advantage of the JSON format (or similar) with canonical palettes is that it is an explicit mapping from one color to another, so it is not dependent on the colors being in an arbitrary order.
For instance, basxto has run into issues with ImageMagick sorting its indexed palettes and thus messing up the mapping from one color to another. Relatedly, if you have (for example) a 6-color metal palette and you create an image with 5 additional colors (for plumage), you have to ensure the two palettes stay properly sorted (e.g. the first 6 colors in the palette are metal, the next 5 are plumage), the recolor script has to know this, and it has to never be messed up by any tooling in between. On the other hand, with the JSON mapping solution, as long as the canonical palettes for plumage and metal use different colors, there is no confusion and no possibility of tools screwing up the ordering of the palette.
So I guess I'm talking myself into having canonical palettes per-material and then mapping between those canonical palettes (using JSON or another format; for instance, a 2 x n pixel PNG image, where n is the number of colors in the palette).
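A recolor driven by such a mapping image could be as small as this (reading row 0 as the source colors and row 1 as the targets; that layout is just my assumption):

import numpy as np
from PIL import Image

def recolor(image_path, mapping_path, out_path):
    # mapping_path is a 2 x n RGBA PNG: row 0 = old colors, row 1 = new colors.
    img = np.array(Image.open(image_path).convert("RGBA"))
    mapping = np.array(Image.open(mapping_path).convert("RGBA"))
    out = img.copy()
    for old, new in zip(mapping[0], mapping[1]):
        # Match against the untouched original, so chained replacements can't collide.
        out[(img == old).all(axis=-1)] = new
    Image.fromarray(out, mode="RGBA").save(out_path)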
I haven't thought about the issues of forking with Git LFS... I'm not sure how that works. OTOH, LFS might not be necessary... castelonia's current character creator repository is only ~100 MB, which is not terrible. GitHub has nice tools for visual diffs between binary images. I'm happy to consider other alternatives if you have ideas, but git/GitHub still seems like the most obvious solution for the time being...
I did not notice at first that this JSON format encodes a color mapping and not palettes.
switchpalette.sh actually takes two palettes and maps every color in the first one to the color with the same index in the second one; that works with RGBA. I wrote loadgpl because I worked with a lot of subpalettes that shared colors at that time. The implementation of switchpalette.sh also has problems, because one color is mapped after another, which leads to errors when the source and target palettes share colors. It also leads to errors, which you sometimes do not immediately notice, when the source image has some wrong color values. That's one of the main reasons why I hated that I had to do such a translation in JavaScript, but I think I replaced all the colors at the same time there.
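To make the shared-color problem concrete, a single-pass lookup avoids it, while replacing one color after another does not (a toy illustration, not taken from switchpalette.sh):

def switch_palette(colors, source, target):
    # Map every color through source -> target in one pass; colors shared between
    # the two palettes cannot chain into a second replacement this way.
    lookup = dict(zip(source, target))
    return [lookup.get(c, c) for c in colors]

source = [(255, 0, 0), (0, 0, 255)]   # red, blue
target = [(0, 0, 255), (0, 255, 0)]   # blue, green
print(switch_palette(source, source, target))
# -> [(0, 0, 255), (0, 255, 0)]; sequential replacement would instead turn the
#    original red into blue and then into green.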
gitg and similar tools also have image diffs integrated.
I don't think size itself will be much of an issue with 64x64/32x32 pixel art.
Edit regarding size:
Indexed PNGs can be smaller.
On example "Bascinet, Raised, Plumage.png":
Original size: 54.2KiB (8bit RGBA)
Just running optipng: 12.3KiB (4bit indexed)
Filling all transparent colors with black in GIMP and then making black transparent, then saving with maximum compression and minimal additional data: 31.6KiB (8bit RGBA)
Just saving it again with mtPaint makes it grow to 32.4KiB (8bit RGBA), that's the difference in PNG compresison different tools do with highest level (9) compression.
Running optipng on this: 10.5KiB (4bit indexed)
Just making the gimp image into 8bit paletted with mtPaint: 11.5KiB (8bit indexed)
mtPaint does not support <8bit. It can be seen that cleaning up transparent colors can also make quite a different. The first run of optipng actually created an indexed image with an alpha channel, there is no index marked as transparent, I'm not sure whether that's a non-standard thing to do or not.
Color values make a difference because of the run-length-style matching in LZ77, which is part of the DEFLATE algorithm that PNG uses for compression.
So that's another thing we should/could take into account: all transparent areas should have the same color. The original LPC spritesheets already get that wrong. If you use a paletted PNG with just one index being transparent, you are guaranteed not to make that mistake. But disabling alpha in the editor (which is different from removing it from the image) should amount to the same thing; it lets you see which colors the transparent pixels have.
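The flattening step on its own is also easy in Pillow/numpy, for anyone who wants to skip GIMP here (just a sketch):

import numpy as np
from PIL import Image

def blacken_transparent(path, out_path):
    # Give every fully transparent pixel the same RGB (black), so the byte runs
    # compress better under DEFLATE.
    pixels = np.array(Image.open(path).convert("RGBA"))
    pixels[pixels[:, :, 3] == 0] = (0, 0, 0, 0)
    Image.fromarray(pixels, mode="RGBA").save(out_path)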
edit2:
The automated way with ImageMagick + pngcrush + optipng would be:
file=Bascinet,\ Raised,\ Plumage.png; magick "${file}" -background black -flatten -transparent black "${file}_im.png"; pngcrush -rem alla -rem text "${file}_im.png" "${file}_cru.png"; optipng "${file}_cru.png" -out "${file}_opti.png"
Optipng only saves 10 bytes in this example, since pngcrush already compressed it well.
Hi everyone -
I know it's been a while, but I did follow up on my threat to create a set of command line tools for editing LPC spritesheets (and other pixel art images). More details here https://opengameart.org/forumtopic/release-lpctools-tools-for-manipulati... tile-sets and examples in the github repo https://github.com/bluecarrot16/lpctools .
I will be working with castelonia to use these tools to improve the Universal Spritesheet Generator https://github.com/sanderfrenken/Universal-LPC-Spritesheet-Character-Gen... , in particular, creating many more automatic recolors of clothing and re-introducing a process for automatically building hairstyles, hats, and shields with these tools.
I'd really appreciate your comments and suggestions!