emkay Posted June 25, 2012

"There is no logical bug AFAIK - the process is random."

There is a logical bug. It probably has to do with the values of the colours themselves, even if it is unintentional. Look at the picture... Both were based on the same set of 17 indexed colours. The lower one took 587 million evaluations and reached a distance of 6.45; the upper one took only 66M evaluations and reached a better distance of 5.72. Only the values of the colours were changed.
emkay Posted June 25, 2012 Share Posted June 25, 2012 (edited) A Detail-Speedup helper is easily to build... It's something like "Phase Altering Line" 52K evaluations, 0,5 distance.... up to 8 colours per scanline. This picture can be named "finished" After this is reached, and you want to have more details, just put the picture in , non-inverted.... to have this: from this: In less than 9M evaluations. In that state, the converter handles the details and colours with a more exact "care"... Edited June 25, 2012 by emkay Quote Link to comment Share on other sites More sharing options...
ilmenit (Author) Posted June 25, 2012

And where is the bug?
1. Colour distance is calculated in the YUV (default) space. For different colours the distance is different. A smaller distance does not mean that, to the human eye, the picture is always more or less similar.
2. The process is random. Run a few instances and choose the best result.
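To make the first point concrete, here is a minimal sketch of a YUV-based colour distance. The BT.601 conversion coefficients below are an assumption for illustration; RastaConverter's actual conversion and weighting may differ. It shows two colours at identical Euclidean RGB distance from black that end up at very different YUV distances:

```python
def rgb_to_yuv(rgb):
    # BT.601-style RGB -> YUV conversion (assumed coefficients)
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return (y, u, v)

def dist_rgb(c1, c2):
    # plain Euclidean distance between two 3-tuples (/distance=euclid style)
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def dist_yuv(c1, c2):
    # the same Euclidean distance, but measured in YUV space
    return dist_rgb(rgb_to_yuv(c1), rgb_to_yuv(c2))

red, blue, black = (255, 0, 0), (0, 0, 255), (0, 0, 0)
same_in_rgb = dist_rgb(red, black) == dist_rgb(blue, black)  # both 255.0
yuv_red = dist_yuv(red, black)    # ~178: red sits far from black in YUV
yuv_blue = dist_yuv(blue, black)  # ~118: blue sits much closer in YUV
```

So "a smaller distance" genuinely depends on the hue involved, which is exactly the behaviour being argued about in this thread.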
emkay Posted June 25, 2012

"For different colours the distance is different."

This is the bug, if it means that "blue" has a higher priority in the distance value than "red". If so, the generator would do better to draw the red parts first, then the green parts, and the blue parts last (for example).
+Stephen Posted June 25, 2012

"This is the bug, if this means that 'blue' has a higher priority in the distance value than 'red'."

Is that really a bug, though? The YUV colourspace is somewhat odd in this manner.
ilmenit (Author) Posted June 25, 2012

"This is the bug, if this means that 'blue' has a higher priority in the distance value than 'red'."

Emkay, read some more about colour spaces... I won't copy-paste all the theory about it here. Use /distance=euclid if you don't like the YUV space, but expect visually worse results.
emkay Posted June 25, 2012

"Is that really a bug though? YUV colourspace is somewhat odd in this manner."

It has NOTHING to do with the YUV space. I'm just using that demonstration to show the bug in the converter. As I use pictures with straight, indexed colours, no problem should arise, particularly BECAUSE the colour values were 100% identical to the destination picture. It's just that the import "routine" simply doesn't care about colours:

:Subroutine
"Hey, import routine, I have a red colour for ya..."
"No thanks, converter, it isn't needed."
:endsub
emkay Posted June 25, 2012

OK, a small riddle. But first an explanation: in the 1st picture the "moon" got a different colour, which was sorted out in less than 600K evaluations. Let's see whether another solution brings the red colour into the right shape. Have a look at the destination picture and the imported graphics. Which one of them has the best "image shaping"?
ilmenit (Author) Posted June 26, 2012

"It has NOTHING to do with the YUV space. I'm just using that demonstration, to show the bug in the converter."

Emkay, EOT from my side. You don't want to understand how and why the converter works, while everything is described in this topic and in the manual. You also seem to forget the graphics capabilities of the 8-bit Atari. What you perceive as a "bug" is the random (by design) behaviour of RastaConverter.
emkay Posted June 26, 2012

"What you perceive as 'bug' is the random (by design) behaviour of RastaConverter."

LOL. Every time the same. It's really getting boring to see people do only half-baked stuff with the Atari. On the other hand, knowing the bugs helps to do workarounds... like puzzling... and I like puzzling.
emkay Posted June 26, 2012 Share Posted June 26, 2012 (edited) Repeatable results show that I'm right. 100000 evaluations to have this: from this: ... with a very low banding. If you want to create a movie, the speedup is tremendous. Edited June 26, 2012 by emkay Quote Link to comment Share on other sites More sharing options...
+Philsan Posted June 26, 2012

Originals:
Atari:
analmux Posted June 26, 2012

"... while everything is described in this topic and in the manual."

Then (I'm a bit lazy about trying to find an explanation in this thread): why do you use this random behaviour? What's the advantage? Or did you already explain this earlier in this thread? ... and can the user activate or deactivate this feature?
Xuel Posted June 26, 2012

"Why do you use this random behaviour then? What's the advantage? ... and can the user activate or deactivate this feature?"

RastaConverter uses a heuristic Stochastic Hill Climbing algorithm. See also the help.txt file. There's no way to disable the randomness, but it might be worth adding a random-seed option, which would let you reproduce previous results just by providing the same seed.
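The core loop of stochastic hill climbing is tiny. This is a generic sketch on a toy problem, not RastaConverter's implementation, and the `seed` parameter illustrates the reproducibility idea: the same seed replays the same run.

```python
import random

def stochastic_hill_climb(score, mutate, start, evaluations, seed=None):
    # Keep a random mutation only if it does not worsen the score
    # (lower score = better, like RastaConverter's "distance").
    rng = random.Random(seed)  # fixed seed -> fully reproducible run
    best, best_score = start, score(start)
    for _ in range(evaluations):
        candidate = mutate(best, rng)
        cand_score = score(candidate)
        if cand_score <= best_score:
            best, best_score = candidate, cand_score
    return best, best_score

# Toy problem: match a target list of palette indices.
target = [3, 1, 4, 1, 5]
score = lambda xs: sum(abs(a - b) for a, b in zip(xs, target))

def mutate(xs, rng):
    # nudge one random element up or down by one
    ys = list(xs)
    ys[rng.randrange(len(ys))] += rng.choice([-1, 1])
    return ys

result, final_score = stochastic_hill_climb(score, mutate, [0] * 5, 2000, seed=42)
```

The random walk explains why two runs on the same input diverge, and why running several instances and keeping the best (as ilmenit suggests above) is the intended workflow.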
emkay Posted June 27, 2012

We're getting closer to the logical bug. The A8 palette is an indexed palette with defined colours and fixed brightness values. With that, "hill climbing" always tries to put in colours that are wrong. The palette usage then turns out "unsharp", which leaves pixels and image details unhandled. On the C64 they assign that imaginary brightness value to every colour, to have all colours handled "image exact". But this isn't the way on the A8... My examples show how to make the hill climbing handle the left-out image details. It's just like turning the hill into a sea and the rock into water, so the missed-out details get handled first. Then the source image gets changed back into a "hill of rocks", and the result shows much less banding and more visual detail.
emkay Posted June 27, 2012

Look at the Blood Money title... some 100K evaluations, and very low progress now at 200M evaluations. The moon (?) still shows banding. And please don't tell me there is no CPU time left to change one colour at the right side of that image; as I've proven with another picture, there is time. It's just that logical bug:

For S = 1 to 10000
    We could change the colour
    We will change it
    Hm... the colour isn't different enough
    OK, drop it.
    Hey, we need a different colour there
Next S
Heaven/TQA Posted June 27, 2012

Emkay, isn't the source code available? Grab it, modify it on your own and play around with the code to see if it helps. I'm not good enough to judge whether the converter does good or bad things, because it's a tool that handles all kinds of pictures. Maybe we'll find a way to improve it, but wouldn't a scientific approach be to paint some test pictures and use them to improve the converter and spot issues (like with Altirra and Avery's test apps), instead of grabbing randomly chosen gfx off the net?
ilmenit Posted June 27, 2012 Author Share Posted June 27, 2012 (edited) Showing some pics and making some strange assumptions is pointless. I'm open to any reasonable ideas (or pseudocode) for the conversion algorithm or improvements. To propose improvements you really should understand how the converter currently works. Without it all the discussion will be meaningless like above. To define the problem complexity: - We have 3 basic registers (X,Y,A) that can take values 0-255. - We have 12 registers (4 playfield colors, 4 player positions, 4 player colors) can be changed through basic registers: A=value (LDA value, 2 cycles) COLOR0=A (STA COLOR0, 4 cycles) - We have 54 cycles per line. - We know where each CPU cycle is placed on the screen. How the screen is created. - To simplify the process sprites are always quad width. Let's try to calculate the brute force requirements: 1 (NOP) + 3 (LDx)*256 colors + 3*(STx)*12(regs) = 805 possible instruction combinations. LDx and STx have different length and the average kernel program has 17 instructions. For 240 lines we have 805^(17*240)=805^4080 combinations. Limiting the LDx values only to the colors appearing in a picture (f.e. 10) would limit the search space to "only" 67^4080 combinations. Edited June 27, 2012 by ilmenit Quote Link to comment Share on other sites More sharing options...
Rybags Posted June 27, 2012 Share Posted June 27, 2012 (edited) An improvement I could suggest (if not already done) - if a dominant colour is found in a scanline and has a "spread" ie coverage over left to right, then reserve/set it and exclude from further calculations for that line. Possibly there could be paramaters to change threshold/range or exclude the method from the algorithm. Of course, the "colour" for purpose of discussion, is an A8 palette value and the pixels involved are acceptably close to that. Another, although it might be cumbersome - Multithreading, with the picture split among those threads for multicore users. Edited June 27, 2012 by Rybags Quote Link to comment Share on other sites More sharing options...
ilmenit Posted June 27, 2012 Author Share Posted June 27, 2012 (edited) An improvement I could suggest (if not already done) - if a dominant colour is found in a scanline and has a "spread" ie coverage over left to right, then reserve/set it and exclude from further calculations for that line. Imagine a situation where the first line has 3 colours, all of them are greys, and the second line has many different colours (none of them is gray). It may be better not to freeze the dominant colour but to average thouse 3 greys to a single grey and to ready colour registers for the second line, without using them in the first line, to have global similarity better. Unfortunatelly you can't change all the register whenever you want. All the changes are happening when the screen is drawn. This case is very common - RastaConverter averages colours to minimize picture distance when it needs more colours somewhere else. Edited June 27, 2012 by ilmenit Quote Link to comment Share on other sites More sharing options...
emkay Posted June 28, 2012

"This case is very common - RastaConverter averages colours to minimize picture distance when it needs more colours somewhere else."

So much theory... so many errors. The importer produces errors which make it impossible to import even some 4-colour pictures at 160x200 pixels. Now imagine more colours and more variation of detail: the errors multiply and won't be solved, just handled and handled again...
ilmenit (Author) Posted June 28, 2012

Emkay, you are starting to be not only ignorant but also irritating.
Wrathchild Posted June 28, 2012

Emkay, at least assist people by posting your original source image for them to try against. Why not "ring" the pixels/areas you were concerned about to highlight the problem? I can't know what's wrong immediately. Looking at the screenshot, this appears evident:
- The picture posted is a JPG and so doesn't display correctly.
- Is your source picture not using the exact same colours as the Laoo palette?
- You're using the option dither=chess. Why? Set it to none or leave it out.
ilmenit Posted June 28, 2012 Author Share Posted June 28, 2012 (edited) With gif file and options: RastaConverter.exe koronis.gif /filter=box /h=200 /init=less we get the 1:1 output in about 20K evaluations. koronis.xex Edited June 28, 2012 by ilmenit Quote Link to comment Share on other sites More sharing options...
Wrathchild Posted June 28, 2012

"With gif file and options:"

Beat me to it (the first time around it actually got there in 7K evaluations). PNG image converted to the Laoo palette. Tip: read the tutorial earlier in this thread regarding Timanthes.