The difference between a computer image and a computer screen is not an easy one to draw, according to computer science graduate student Jie Chen, who has been developing a new way to tell the two apart.
The process starts by identifying the pixels in the computer image, each of which can be a single colour or part of a series of coloured dots.
In a computer program, for instance, it is possible to select a single pixel from such a series.
Each series on the screen carries a series number, along with a count of how many pixels in that series are visible to the user.
That count need not equal the total number of visible pixels; it can refer to any subset of the pixels a user can see.
“There are two kinds of pixels that can be used: the ones that are part of the image, and those that are not,” Chen says.
“So you can have a series of eight pixels where four are part of the image and the other four are white.”
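Chen's example of a series that mixes image pixels with white pixels can be illustrated with a short sketch. The representation of a pixel as an RGB tuple is an assumption for illustration, not a detail from the article:

```python
# Hypothetical representation: a pixel series is a list of (r, g, b) tuples.
WHITE = (255, 255, 255)

def count_white(series, white=WHITE):
    """Count the pixels in a series that are white rather than part of the image."""
    return sum(1 for pixel in series if pixel == white)

# A series of eight pixels: four image pixels, four white pixels.
series = [(10, 20, 30)] * 4 + [WHITE] * 4
print(count_white(series))  # -> 4
```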
The number of white pixels visible can be represented by a binary integer whose width, in bits, grows with the logarithm of the count.
For example, if a screen can show up to 64 white pixels, the count fits in a 7-bit integer.
And if the screen can show up to 128 white pixels, then 8 bits are needed.
“That means you can get a result of 0.0 by multiplying 4.4 by 0.
If you multiply 4.3 by 0, that gives you the same result,” Chen explains.
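As a point of reference, the integer width needed to store a pixel count grows with the logarithm of the count. A minimal sketch in Python, using the standard `int.bit_length` method (this is illustrative, not code from Chen's work):

```python
def bits_for_count(n):
    """Smallest integer width, in bits, that can hold the non-negative count n."""
    return max(1, n.bit_length())

# A count of 64 needs 7 bits (2**6 = 64 requires a 7th digit in binary);
# a count of 128 needs 8 bits.
print(bits_for_count(64))   # -> 7
print(bits_for_count(128))  # -> 8
```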
The next step is to identify the pixels on a computer monitor or other display device.
For instance, Chen has developed a way of identifying pixels on the screen that are invisible to the human eye, using the same technique that computers use to determine the resolution of images.
For Chen’s approach, he has found a way to detect pixels that are in the visible range, which means the light they emit falls between roughly 0.4 and 0.7 micrometres in wavelength.
For example: a pixel that is at the edge of a screen, or one that is above the horizontal line.
The result, he says, is a very good way to determine which pixels are visible.
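The article does not say how these cases are detected. A hypothetical predicate covering the two examples given, pixels at the screen edge and pixels above a horizontal line, might look like this (the coordinate convention with y = 0 at the top and the name `horizon_y` are assumptions, not details from Chen's method):

```python
def is_candidate(x, y, width, height, horizon_y):
    """Flag pixels at the screen edge or above a given horizontal line.

    Purely illustrative: the real criteria are not described in the article.
    """
    at_edge = x in (0, width - 1) or y in (0, height - 1)
    above_line = y < horizon_y
    return at_edge or above_line
```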
Chen also has developed an algorithm to distinguish between different types of pixels on different displays.
“The algorithm I use is called pixel-weighted computing,” he says.
“It’s a technique that lets you pick out pixels whose brightness differs only slightly from their surroundings, the ones that are a little bit brighter than the other pixels.
To do this, the algorithm takes each pixel and weighs it in terms of its brightness.”
Using the pixel-weighting technique, Chen says he has been able to determine that some pixels on a screen are more likely to be visible than pixels on other displays.
It is possible, he says, to pick a set of pixels and use that to identify different colours.
“So the idea is that you can pick the pixel that you’re looking at, and you can then choose a colour and it will be more easily identifiable than other colours,” Chen explained.
“You can pick a single bit of the colour, and if that bit matches, that’s what’s important.
If it doesn’t match, you can just choose a different pixel, one that is more likely than not to be the same colour as the pixel you started with.”
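Chen does not specify which bit of the colour is used or how pixels are compared; a hypothetical illustration of testing a single bit of one colour channel (both parameters here are made up for the example):

```python
def colour_bit(rgb, channel, bit):
    """Extract one bit from one channel of an (r, g, b) pixel.

    Which channel and which bit to use is not stated in the article;
    both arguments are illustrative.
    """
    return (rgb[channel] >> bit) & 1

def same_colour(p, q, channel, bit):
    """Treat two pixels as 'the same colour' when the chosen bit matches."""
    return colour_bit(p, channel, bit) == colour_bit(q, channel, bit)

print(same_colour((128, 0, 0), (255, 0, 0), channel=0, bit=7))  # -> True
```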
It is not clear whether Chen’s method will have wide-ranging applications, but the potential of this technology is exciting, he adds.
“In my opinion, this could be the basis of a lot of other things.
If you have a lot more data about far fewer devices, you can start to build much better algorithms,” he said.