
Quantizator


ilmenit


 

There is no logical bug AFAIK - the process is random.

 

There is a logical bug. It probably has to do with the values of the colours themselves, and it is not intentional.

 

 

Look at the picture....

 

Both were based on the same 17 indexed colours.

 

post-2756-0-51357500-1340615435_thumb.jpg

 

The lower one took 587 million evaluations and has a distance of 6.45.

 

The upper one took only 66M evaluations and has a better distance of 5.72.

 

Only the values of the colours have changed.

 

 


A detail-speedup helper is easy to build...

 

It's something like "Phase Altering Line"

 

 

post-2756-0-92415400-1340620082_thumb.jpg

 

52K evaluations, 0.5 distance... up to 8 colours per scanline.

 

This picture can be called "finished".

 

Once this is reached and you want more detail, just put the picture in again, non-inverted...

 

to have this:

post-2756-0-16327500-1340620396_thumb.png

 

from this:

post-2756-0-74635100-1340620461_thumb.png

 

In less than 9M evaluations.

 

In that state, the converter handles the details and colours with more exact "care"...


And where is the bug?

1. Colour distance is calculated in YUV space (the default). For different colours the distance is different, so a smaller distance does not always mean the picture looks more similar to the human eye (see the sketch below).

2. The process is random. Run a few instances and choose the best result.
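
For anyone wondering what "for different colours the distance is different" means in practice, here is a minimal sketch of a YUV-based distance next to a plain Euclidean RGB one. The BT.601 weights are an assumption for illustration only; the exact formula RastaConverter uses may differ, and the function names are made up.

// Sketch of a per-pixel colour distance in YUV space (BT.601 weights assumed).
#include <cmath>

struct RGB { double r, g, b; };

static double yuvDistance(const RGB& a, const RGB& b)
{
    // Convert both colours to YUV.
    auto toYUV = [](const RGB& c, double& y, double& u, double& v) {
        y =  0.299 * c.r + 0.587 * c.g + 0.114 * c.b;
        u = -0.147 * c.r - 0.289 * c.g + 0.436 * c.b;
        v =  0.615 * c.r - 0.515 * c.g - 0.100 * c.b;
    };
    double y1, u1, v1, y2, u2, v2;
    toYUV(a, y1, u1, v1);
    toYUV(b, y2, u2, v2);
    // Because Y, U and V weight the RGB channels unevenly, two colour pairs
    // with the same RGB (Euclidean) distance can end up with different YUV
    // distances - this is the "different distance for different colours" effect.
    return std::sqrt((y1 - y2) * (y1 - y2) + (u1 - u2) * (u1 - u2) + (v1 - v2) * (v1 - v2));
}

// Plain Euclidean RGB distance - roughly what /distance=euclid selects instead.
static double euclidDistance(const RGB& a, const RGB& b)
{
    return std::sqrt((a.r - b.r) * (a.r - b.r) +
                     (a.g - b.g) * (a.g - b.g) +
                     (a.b - b.b) * (a.b - b.b));
}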


For different colours the distance is different.

 

This is the bug, if this means that "blue" has a higher priority in the distance value than "red".

 

If this is so, the generator would do better to draw red parts 1st, then green parts and blue parts last. (for example)


For different colours the distance is different.

 

This is the bug, if this means that "blue" has a higher priority in the distance value than "red".

 

If this is so, the generator would do better to draw red parts 1st, then green parts and blue parts last. (for example)

Is that really a bug though? YUV colourspace is somewhat odd in this manner.


This is the bug, if this means that "blue" has a higher priority in the distance value than "red".

 

Emkay, read some more about the color spaces... I won't copy-paste all the theory about it here.

Use /distance=euclid if you don't like the YUV space but expect visually worse results.


For different colours the distance is different.

 

This is the bug, if this means that "blue" has a higher priority in the distance value than "red".

 

If this is so, the generator would do better to draw red parts 1st, then green parts and blue parts last. (for example)

Is that really a bug though? YUV colourspace is somewhat odd in this manner.

 

It has NOTHING to do with the YUV space. I'm just using that demonstration, to show the bug in the converter.

 

As I use pictures with straight, indexed colours, there shouldn't be any problem, particularly BECAUSE the colour values were 100% exact to the destination picture.

It's just that the import "routine" simply doesn't care about colours.

 

 

:Subroutine

Just like "Hey, import routine, I have a red colour for ya..." "No thanks, converter, it isn't needed."

:endsub

 

;)


OK. A small riddle.

 

But first an explanation: in the first picture the "Moon" got a different colour, which was sorted out in less than 600K evaluations.

 

Let's see when other solutions bring the red colour into the right shape.

 

Have a look at the destination picture and the imported graphics. Which one of them has the better "image shaping"?

 

post-2756-0-09428300-1340642603_thumb.jpg

 

 


It has NOTHING to do with the YUV space. I'm just using that demonstration, to show the bug in the converter.

 

Emkay, EOT from my side. You don't want to understand how and why the converter works, while everything is described in this topic and in the manual. You also seem to forget the graphics capabilities of the 8bit Atari.

What you perceive as "bug" is the random (by design) behaviour of RastaConverter.


It has NOTHING to do with the YUV space. I'm just using that demonstration, to show the bug in the converter.

 

Emkay, EOT from my side. You don't want to understand how and why the converter works, while everything is described in this topic and in the manual. You also seem to forget the graphics capabilities of the 8bit Atari.

What you perceive as "bug" is the random (by design) behaviour of RastaConverter.

 

LOL

Every time the same.

It's really getting boring to see people do only half-baked stuff with the Atari.

On the other hand, knowing the bugs helps with workarounds... like puzzling... and I like puzzling ;)

 

 


... while everything is described in this topic and in the manual. You also seem to forget the graphics capabilities of the 8bit Atari. What you perceive as "bug" is the random (by design) behaviour of RastaConverter.

Then (I'm a bit lazy about searching this thread for an explanation): why do you use this random behaviour? What's the advantage? Or did you already explain this earlier in this thread? ... and can the user enable or disable this feature?


... while everything is described in this topic and in the manual. You also seem to forget the graphics capabilities of the 8bit Atari. What you perceive as "bug" is the random (by design) behaviour of RastaConverter.

Then (I'm a bit lazy about searching this thread for an explanation): why do you use this random behaviour? What's the advantage? Or did you already explain this earlier in this thread? ... and can the user enable or disable this feature?

 

RastaConverter uses a heuristic Stochastic Hill Climbing algorithm. See also the help.txt file.

 

There's no way to disable the randomness, but it might be worth adding a random seed option which would allow you to reproduce previous results by just providing the same random seed.
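
For readers unfamiliar with the term, here is a minimal sketch of what a stochastic hill-climbing loop with a fixed random seed could look like. The toy Kernel type, Evaluate and MutateRandomly helpers and all numbers are illustrative stand-ins, not RastaConverter's actual code.

#include <cstdint>
#include <cstdlib>
#include <random>
#include <vector>

// Toy "kernel": just a vector of colour-register values, standing in for the
// real per-scanline raster program.
using Kernel = std::vector<int>;

// Toy distance: how far each value is from an arbitrary target of 128.
static double Evaluate(const Kernel& k)
{
    double d = 0.0;
    for (int v : k) d += std::abs(v - 128);
    return d;
}

// One random tweak: change a single element to a random 0-255 value.
static Kernel MutateRandomly(Kernel k, std::mt19937& rng)
{
    std::uniform_int_distribution<std::size_t> pos(0, k.size() - 1);
    std::uniform_int_distribution<int> val(0, 255);
    k[pos(rng)] = val(rng);
    return k;
}

static Kernel HillClimb(Kernel current, uint64_t evaluations, uint32_t seed)
{
    std::mt19937 rng(seed);              // same seed -> same sequence of mutations
    double best = Evaluate(current);
    for (uint64_t i = 0; i < evaluations; ++i) {
        Kernel candidate = MutateRandomly(current, rng);
        double score = Evaluate(candidate);
        if (score <= best) {             // keep only non-worsening changes
            current = candidate;
            best = score;
        }
    }
    return current;
}

Running this twice with the same seed gives identical results, which is exactly what the proposed random seed option would provide; different seeds drift toward different local optima, which is the randomness discussed above.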


We are getting closer to the logical bug.

 

The A8 palette is an indexed palette with defined colours and fixed brightness values.

 

Using that "hill climbing" always tries to put in colours that are wrong. The palette usage then makes the result "unsharp", which leads to unhandled pixels and image details.

On the C64 they assign an imaginary brightness value to every colour, to have all colours handled "image exact".

 

But this isn't the way on the A8....

 

My examples show how to make the "hill climbing" handle the left-out image details...

 

It's just like turning the hill into a sea and the rock into water, so the missed details get handled first. Then the source image gets changed back into a "hill of rocks", and the result has much less banding and more visual detail.

 

 


Look at the Blood Money title...

 

post-2756-0-77711200-1340779262_thumb.png

 

Some 100K evaluations, and very little progress now at 200M evaluations. The moon (?) still shows banding. And please don't tell me that at the right side of that image there is no CPU time left to change one colour; I've proven with another picture that there is time.

It's just that logical bug...

 

For S=1 to 10000
  We could change the colour
  We will change it
  Hm... colour isn't different enough
  OK. Drop it.
  Hey, we need a different colour there
Next S

 

 

 

 


Emkay, is the source code not available? Grab it, modify it on your own, and play around with the code to see if it helps.

 

I am not good enough to judge whether the converter does good or bad things, because it is a tool which handles all kinds of pictures. Maybe we can find a way to improve it, but wouldn't a scientific approach be to paint some test pictures and use them to improve the converter and spot issues (like with Altirra and Avery's test apps), instead of grabbing randomly chosen gfx from the net?


Showing some pics and making some strange assumptions is pointless.

I'm open to any reasonable ideas (or pseudocode) for the conversion algorithm or improvements.

To propose improvements you really should understand how the converter currently works. Without that, all the discussion will be meaningless, like above.

 

To define the problem complexity:

- We have 3 basic registers (X,Y,A) that can take values 0-255.

- We have 12 registers (4 playfield colors, 4 player positions, 4 player colors) that can be changed through the basic registers:

A=value (LDA value, 2 cycles)

COLOR0=A (STA COLOR0, 4 cycles)

- We have 54 cycles per line.

- We know where each CPU cycle is placed on the screen and how the screen is created.

- To simplify the process, sprites are always quad width.

 

Let's try to calculate the brute force requirements:

1 (NOP) + 3 (LDx) * 256 colors + 3 (STx) * 12 regs = 805 possible instruction combinations.

LDx and STx have different lengths, and the average kernel program has 17 instructions per line. For 240 lines we have 805^(17*240) = 805^4080 combinations.

Limiting the LDx values to only the colors appearing in a picture (e.g. 10) would limit the search space to "only" 67^4080 combinations.
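
As a sanity check of the numbers above, a few lines of throwaway C++ reproduce the arithmetic; since 805^4080 overflows any native type, it reports the size in decimal digits instead.

// Back-of-the-envelope check of the search space size (not part of the converter).
#include <cmath>
#include <cstdio>

int main()
{
    const double loads  = 3.0 * 256;              // LDA/LDX/LDY with any of 256 values
    const double stores = 3.0 * 12;               // STA/STX/STY into 12 target registers
    const double opts   = 1.0 + loads + stores;   // + NOP = 805 instruction choices
    const double slots  = 17.0 * 240;             // ~17 instructions/line * 240 lines = 4080

    printf("instruction choices: %.0f\n", opts);
    printf("805^4080 has about %.0f decimal digits\n", slots * std::log10(opts));

    const double limited = 1.0 + 3.0 * 10 + stores;   // LDx limited to 10 colours = 67
    printf("67^4080 has about %.0f decimal digits\n", slots * std::log10(limited));
    return 0;
}

Even the reduced 67^4080 space is a number thousands of decimal digits long, which is why an exhaustive search is out of the question and a randomized heuristic is used instead.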


An improvement I could suggest (if not already done): if a dominant colour is found in a scanline and has a "spread", i.e. coverage from left to right, then reserve/set it and exclude it from further calculations for that line.

Possibly there could be parameters to change the threshold/range or to exclude the method from the algorithm.

Of course, the "colour", for the purpose of discussion, is an A8 palette value, and the pixels involved are acceptably close to it.
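
To make the suggestion concrete, here is a rough sketch of such a pre-pass. dominantColour, the thresholds and the per-pixel palette indices are all hypothetical illustration, not part of RastaConverter.

// Sketch of the suggested pre-pass: find a colour that both dominates a
// scanline and spans most of its width. The thresholds are made-up knobs.
#include <map>
#include <vector>

// Returns the dominant palette index for one scanline, or -1 if none qualifies.
// 'pixels' holds the nearest A8 palette index of each pixel on the line.
static int dominantColour(const std::vector<int>& pixels,
                          double coverageThreshold = 0.4,   // share of pixels
                          double spreadThreshold   = 0.6)   // share of line width
{
    if (pixels.empty()) return -1;
    std::map<int, int> count, first, last;
    for (int x = 0; x < static_cast<int>(pixels.size()); ++x) {
        int c = pixels[x];
        if (count.find(c) == count.end()) first[c] = x;
        last[c] = x;
        ++count[c];
    }
    for (const auto& entry : count) {
        int colour      = entry.first;
        double coverage = double(entry.second) / pixels.size();
        double spread   = double(last[colour] - first[colour] + 1) / pixels.size();
        if (coverage >= coverageThreshold && spread >= spreadThreshold)
            return colour;   // reserve this register for the whole line
    }
    return -1;
}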

 

Another suggestion, although it might be cumbersome: multithreading, with the picture split among threads for multicore users.


An improvement I could suggest (if not already done): if a dominant colour is found in a scanline and has a "spread", i.e. coverage from left to right, then reserve/set it and exclude it from further calculations for that line.

 

Imagine a situation where the first line has 3 colours, all of them greys, and the second line has many different colours (none of them grey).

post-22831-0-94959800-1340794080_thumb.png

It may be better not to freeze the dominant colour, but to average those 3 greys to a single grey and keep colour registers ready for the second line, without using them in the first line, to get better global similarity. Unfortunately you can't change all the registers whenever you want; all the changes happen while the screen is drawn.

 

This case is very common - RastaConverter averages colours to minimize picture distance when it needs more colours somewhere else.
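
A tiny illustration of the averaging trade-off described above; averageColours is a hypothetical helper, not RastaConverter's implementation.

// Collapse a group of similar colours into their average, freeing the other
// registers for use further down the screen (e.g. three greys become one grey).
#include <vector>

struct RGB { int r, g, b; };

static RGB averageColours(const std::vector<RGB>& group)
{
    long r = 0, g = 0, b = 0;
    for (const RGB& c : group) { r += c.r; g += c.g; b += c.b; }
    const int n = static_cast<int>(group.size());
    return { static_cast<int>(r / n), static_cast<int>(g / n), static_cast<int>(b / n) };
}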


 

This case is very common - RastaConverter averages colours to minimize picture distance when it needs more colours somewhere else.

 

So much theory... so many errors.

 

The importer produces errors... which even makes it impossible to import a 4-colour picture at 160x200 pixels.

 

post-2756-0-92534600-1340868979_thumb.jpg

 

Now imagine more colours and more variations of detail... the errors multiply and won't be solved, just handled and handled again...


Emkay, at least assist people by posting your original source image for them to try against.

Why not 'ring' the pixels/areas you were concerned about to highlight the problem? I can't tell immediately what's wrong.

 

Looking at the screen shot this appears evident:

The picture posted is a jpg, so it doesn't display correctly.

Is your source picture not using the exact same colours as the laoo palette?

You're using the option 'dither=chess'. Why? Set it to none or leave it out.

