Where does this information come from? It's difficult to research and I'm inclined to disagree about a few things, admittedly based only on instinct and logic.
I would assume the legally binding license is the one in force at the project's 'point of no return' - release or publication, after which there is no way to remove the content - not just the one in force at the moment it was acquired. Otherwise anyone could claim they downloaded the asset during the few minutes the author had accidentally mislabelled it as public domain. The onus would be on the licensee to keep up with the agreement right up until that point.
There's also the matter of proving two things: that the uploader holds original authorship of the content, and that they are the one who actually uploaded it and attached that particular license. It's naive to assume that everything uploaded and licensed on these kinds of sites is genuinely the property of the person uploading it, knowingly or not. I do know a little about public domain licensing - a photo can technically be in the public domain while almost everything depicted in it (people, company trademarks, etc.) remains off-limits and can expose you to lawsuits if you use it.
At the end of the day, all of this would be settled in a lawsuit, and it comes down to the evidence provided and its validity. The best-case scenario, if things go south, is a settlement to license the content after the fact and the hope that the sum is reasonable. I imagine this happens all the time, possibly years after a project has been released.
As for attribution, it's just not practical to credit every single person in every single screenshot. To me, attribution means an official record of credits - a dedicated webpage and the 'credits sequence' - unless the author's license explicitly demands more. Trying to track which texture on which polygon belongs to which artist, and where it came from, isn't a reasonable expectation, and sometimes isn't even possible.
As I said, it would be great to get some solid information about this stuff instead of assumptions - from legitimate sources and from people who actually know copyright law. I find a lot of forum posts with pretty bold claims but nothing to back them up.
This is something I'm worried about too. Also: licenses that change silently, behind the curtain.
(Poliigon, for example, went from offering Creative Commons content for free to charging for use of that same content. Cubebrush is another - it sometimes offers 'freebies', then charges $10 for commercial use of content whose original license already permits commercial use.)
I know there are (US) laws protecting 'consumers' against having the rug pulled out from under us, but perhaps the responsibility is on us to prove everything, and that's not so easy. I've made a habit of documenting as much as I can, including saving the EULA/TOS pages of the websites I've outsourced from, at the time I outsourced from them - whether that is any kind of viable protection is not for me to answer.
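For what it's worth, something as simple as the rough Python sketch below is roughly what I mean by saving the page at the time of download - keep a timestamp and a content hash with each copy. The URL is just a placeholder:

```python
# Rough sketch: archive a licence/TOS page with a timestamp and content hash,
# so there's at least some record of what the terms said at download time.
import datetime
import hashlib

import requests  # third-party: pip install requests


def archive_licence_page(url: str, out_dir: str = ".") -> str:
    """Download a licence/TOS page and save it with a UTC timestamp and hash."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M%SZ")
    digest = hashlib.sha256(response.content).hexdigest()[:12]
    filename = f"{out_dir}/licence-{stamp}-{digest}.html"
    with open(filename, "wb") as f:
        f.write(response.content)
    return filename


# Example (placeholder URL):
# archive_licence_page("https://example.com/terms-of-service")
```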
Attribution is another tricky topic - is it a reasonable expectation that the author of outsourced content be credited every single time their work appears in a screenshot, for every single instance of a screenshot?
Ultimately, we'd need someone with actual knowledge about the law to answer these questions.
I always think The Elder Scrolls IV: Oblivion deserves a mention - it's far superior to Skyrim.
I think you're going to find it tough to get such a texture as CC0. I've looked pretty hard and haven't really found anything.
You could probably find enough public domain scraps to "kitbash" something up, though.
If you're willing to be more flexible with licensing, then Evillair just released some awesome new stuff:-
http://www.evillair.net/v4/gametextures/
PhilipK also provides some Doom/Quake style materials that can be used:-
http://www.philipk.net/
And, from me, just incase this might help with your project:-
http://www.violationentertainment.com/temp/dmgrmygrnswtchywll1.zip
(might look familiar)
I've been meaning to upload this and several others here soon.
Anyone is welcome to copy+paste from that thread if they'd like, but I tried to keep it to stuff that is "legacy-friendly" (now more so Win7) and to avoid browser-only tools, so there's plenty more that could be added.
I've been compiling and curating a list of (mostly) free software (and resources) here:-
http://www.violationentertainment.com/wiki/tiki-index.php?page=Free
I still have a pretty huge amount of stuff to test, but I'm determined to keep the list growing and weed out the 404s, and make mirrors of lost software if need be.
Ah, it seems TinkerCAD already performs CSG merging on export, so the screenshots you posted before really threw me off. Still, the decimation in Meshlab is far superior to the Blender modifier for bringing the polycount down even further and getting the meshes ready for baking the high-poly details back on. If you need specific help in any other area, feel free to ask. Maybe all the stuff I posted before will be of use to anyone else looking at kit-bashing their way into content creation.
The first paragraph of my previous post - the link to that webpage - is probably the most important part. Sometimes you can halve the polycount of your model, or better. It's just a couple of easy steps in Meshlab, so I doubt it's beyond anyone's abilities. For that example, something built out of virtual LEGO blocks, the reduction in polycount should be huge. In my experience, the process tends to leave internal leftovers rather than gaps in the outer geometry.
First, removing internal/hidden geometry (which is likely the biggest concern): this can be done by baking a set of lights onto the model, using that lighting to colourise the vertices, then using the vertex colours as a selection and deleting everything the light never reached. Another technique (much easier and quicker, although a little less reliable) is this:-
http://meshlabstuff.blogspot.com/2009/04/how-to-remove-internal-faces-wi...
This will shave off a bunch of triangles and also help minimise overdraw when the mesh is rendered in real time. Neither approach is 100% reliable: depending on the mesh(es), light can leak through or fail to reach everything, leaving either stray triangles inside or holes in the outermost geometry. For the latter case, you can try "Close Holes".
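To make the selection/deletion step concrete, here's a rough Python/numpy sketch of the idea. The occlusion values and threshold are made up for illustration - in practice Meshlab computes the per-vertex lighting for you:

```python
# Sketch of the "delete everything the light never reached" step: given a
# per-vertex "light received" value (e.g. baked ambient occlusion), drop every
# face whose corners all stayed dark.
import numpy as np


def drop_unlit_faces(faces: np.ndarray, occlusion: np.ndarray,
                     threshold: float = 0.01) -> np.ndarray:
    """faces: (F, 3) vertex indices; occlusion: (V,) light per vertex in [0, 1]."""
    lit_vertex = occlusion > threshold        # which vertices caught any light
    keep = lit_vertex[faces].any(axis=1)      # keep a face if any corner is lit
    return faces[keep]


# Toy example: two triangles, the second one hidden inside the model.
faces = np.array([[0, 1, 2], [3, 4, 5]])
occlusion = np.array([0.8, 0.6, 0.9, 0.0, 0.0, 0.0])
print(drop_unlit_faces(faces, occlusion))     # -> [[0 1 2]]
```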
Then do a bit of cleaning up with "Remove Duplicate Faces", "Remove Duplicate Vertices", "Remove Unreferenced Vertices" and "Remove Zero Area Faces". In fact, this is probably worth doing at several points along the way, and definitely at the end of the process.
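If you'd rather script it, pymeshlab (Meshlab's Python interface) can chain those clean-up filters. A rough sketch - the filter names change between pymeshlab releases, so treat the ones below as assumptions and check them against your version's filter list (pymeshlab.print_filter_list()):

```python
# Rough clean-up pass with pymeshlab; filenames are placeholders.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_raw.obj")

for flt in ("meshing_remove_duplicate_faces",
            "meshing_remove_duplicate_vertices",
            "meshing_remove_unreferenced_vertices",
            "meshing_remove_null_faces"):      # "null" faces = zero-area faces
    ms.apply_filter(flt)

ms.save_current_mesh("kitbash_cleaned.obj")
```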
For merging everything together, you would need either a CSG operation or a surface reconstruction (Poisson) - the former takes extra time to piece all the different meshes together, the latter needs a lot of computing power and produces a higher-poly, more organic result. I'd use the former for hard-surface things like machines and robots, and the latter for organic objects like creatures.
Either way, this will make your model "watertight" - an enclosed, contiguous mesh, which is what a GPU wants to chew on - but it will also create a lot of tiny useless triangles and possibly some razor-thin slivers, so those need to be cleaned up and the whole mesh simplified.
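For the Poisson route, a rough pymeshlab sketch - the filter name and the depth parameter are guesses for one release, so check your version's filter list before relying on them:

```python
# Rough sketch: merge everything into one watertight surface via screened
# Poisson reconstruction. It rebuilds a surface from the vertices (and their
# normals) of whatever is loaded, so expect an organic-looking result.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_cleaned.obj")   # placeholder filename
ms.apply_filter("generate_surface_reconstruction_screened_poisson",
                depth=9)                  # higher depth = more detail, more RAM
ms.save_current_mesh("kitbash_merged.ply")
```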
For cleaning and optimising the model, use "Quadric Edge Collapse Decimation", alone or in combination with other simplification methods. Every model is different, and you'll just need to play with the parameters until you get something that brings the polycount down, gives reasonably consistent topology and doesn't make the mesh too ugly in the process. Whether it's something mechanical or organic will dictate which methods and parameters work best.
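The decimation step as another hedged pymeshlab sketch - the parameter names may differ in your version, and the 10k face target is arbitrary, something to tune per model:

```python
# Rough sketch of quadric edge collapse decimation via pymeshlab.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_merged.ply")
ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                targetfacenum=10000,      # aim for ~10k triangles; tune per model
                preservenormal=True,      # avoid flipping faces
                planarquadric=True)       # keeps flat areas tidier
ms.save_current_mesh("kitbash_lowpoly.obj")
```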
At this point, you really need to weigh one trade-off: whether the extra triangles (even after cleaning up) are worth it against the overdraw penalties the model originally had. They usually are, but not always, so keep an eye on the polycount as you go. Each case is different and there's no universal advice to give, but as a generalisation, a modest increase in polycount is worth it against the overdraw and draw batches that a "kit-basher" will face in real-time rendering.
Finally, you can take the original version and push it further - subdivision, texture painting, displacement, extra geometric detail, etc. - then unwrap your clean low-poly version and bake everything onto that: textures, baked lighting/AO, normal maps and so on. Auto-unwrapped UVs are never going to be as good as a UV layout that has been carefully unwrapped and orientated by hand, but you might get lucky and come out with minimal artifacts and decent texel scales. For the high-poly version you're baking from, there are basically no rules - do whatever makes it look the way you want, and bake it all out.
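If you wanted to script that bake in Blender (Cycles), a very rough sketch of the "selected to active" setup looks something like this - the object names, image size and cage extrusion are just placeholders, and normally you'd do all of this in the UI:

```python
# Rough sketch: bake high-poly detail onto the clean low-poly in Blender/Cycles.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

high = bpy.data.objects["KitbashHigh"]    # detailed source (placeholder name)
low = bpy.data.objects["KitbashLow"]      # clean, UV-unwrapped target (placeholder)

# The bake writes into the image assigned to the *active* Image Texture node of
# the low-poly object's material, so create and hook one up first.
img = bpy.data.images.new("kitbash_normal", width=2048, height=2048)
mat = low.active_material                 # assumes the low-poly has a material
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex_node.image = img
mat.node_tree.nodes.active = tex_node

# Select high-poly, make low-poly active, then bake "selected to active".
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.02)
img.save_render("kitbash_normal.png")
```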
If you want to animate it, that's a different story, but it may only require cutting the mesh up a little to create a couple of new loops at the joints, plus a bit of cleaning. Again, it's too case-specific to go into, but sometimes a model can be made animation-ready with a few little cuts here and there.
Ultimately, the real question is: is it worth doing all of this, or just making a real-time 3D model properly from the beginning?
The main issues I can see (without examining it too closely): bad topology - inconsistent detail, with triangle sizes that vary wildly relative to each other - plus intersections and hidden geometry, which not only waste triangles but can cause unnecessary overdraw. Neither issue is easy to fix, but if you do tackle them, Meshlab would be the best (free) bet.