Saturday, 14 April 2018

That does not mean everything is useless.

Everything we know is wrong, has always been, and will always be.

--


--

My experience is that I cannot really control my emotions, but neither do they control me, at least not in any direct manner. I rationally choose how I act even if I can't choose how I feel. Feelings appear to be a consequence of circumstances. For example, external demands can cause stress if they align with my interests and anti-interests in the appropriate manner. The state of my body can also have some impact on my feelings and their spectrum, but generally speaking it doesn't dictate how I act either. I may not be able to suppress the tremors my body makes when I'm nervous, but that doesn't prevent me from doing what I'm motivated to do (unless it's brain surgery).

All motivation seems to stem more or less from emotions, so without emotions and motivation there would be nothing for the rational mind to do. The rational mind seems to be there only to solve an optimization problem: how to best maximize the expectation value of emotional gratification.

All in all, I'm not sure anyone really controls their emotions, but some may find themselves on paths that lead to an apparent feeling of control, which will of course correlate with a greater feeling of happiness. Some may call such crossroads a choice.

So I guess one could say that I have a tough time believing in free will. I don't think I've ever heard a coherent definition of what such a thing could be. Where does it come from? How does it work? How could it exist?

--

Not everyone is biased, some people are just plain wrong.

--

What philosophy is, is a study of "what can be". It's a study that must precede all other studies of science and engineering, for what science studies is "what is", and what engineering answers is "how do we make it". Unfortunately, philosophy has never really settled whether anything could ever be or not be. Nevertheless, "what is?" and "how do we make it?" have been extremely useful questions.

Monday, 9 April 2018

The philosophers have only interpreted the world, in various ways. The point, however, is to change it.

...according to Marx.

A laser (red) is fired from the center location of here and now. A Penrose diagram compresses spatial and temporal distance so that the edges of the diagram represent infinity, while retaining the 45-degree angle for light regardless of spacetime curvature. From such a diagram we can see that the time it takes for light to return from a mirror near the event horizon approaches infinity as the mirror gets closer and closer to the horizon.

Four-shade random dithering, four-shade Floyd-Steinberg and
the original 256-shade grayscale image.

FS dithering with 2 bits per channel (R, G and B each take 4 different values) for
a total of 64 different colors, and the original 16,777,216-color image.

M = 4;                             % number of gray shades
pixel = imread('david.png');
figure; image(pixel); colormap(gray(256)); truesize;
pixel = (M-1)*double(pixel)/255;   % scale pixel values to [0, M-1]

% Floyd-Steinberg error diffusion
[Y, X] = size(pixel);
for y = 1:Y-1
    for x = 2:X-1
        oldpixel = pixel(y, x);
        newpixel = round(oldpixel);         % quantize to the nearest shade
        pixel(y, x) = newpixel;
        quant_error = oldpixel - newpixel;  % diffuse the error to neighbors
        pixel(y,   x+1) = pixel(y,   x+1) + quant_error*7/16;
        pixel(y+1, x-1) = pixel(y+1, x-1) + quant_error*3/16;
        pixel(y+1, x)   = pixel(y+1, x)   + quant_error*5/16;
        pixel(y+1, x+1) = pixel(y+1, x+1) + quant_error*1/16;
    end
end

figure;
image(uint8(255*pixel/(M-1)));
colormap(gray(256));
truesize;


What is the purpose of life?
...to explore and have fun forever?

Saturday, 10 March 2018

Grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.

"Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the 'wet streets cause rain' stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know." - Michael Crichton
--

Single resistor VGA output from an FPGA. VGA pins 13 and 14 (horizontal and vertical sync) are connected directly to the FPGA (Altera Cyclone II); pins 1-3 (RGB) are all connected in parallel through a 100 ohm resistor to the FPGA. Pins 6-8 are ground.

The PLL runs at 25 MHz, giving monochrome VGA output. Adding color would be relatively trivial.

magick convert -fuzz 40% -colors 8 -layers Optimize -delay 5 output.gif out.gif

module test(clock, out0i, out1i, out0q, out1q);

input clock;
output out0i;
output out1i;
output out0q;
output out1q;

wire pll_clock; // 25 MHz pixel clock from the PLL below

reg vga_HS, vga_VS;
reg [9:0] CounterX;
reg [8:0] CounterY;
reg [9:0] temporary;
reg [9:0] cntx;
reg [9:0] cnty;
reg pixvalue;

reg [7:0] fnt[6399:0];
initial $readmemh("fnt.hex", fnt);

always @(posedge pll_clock)
begin
    CounterX <= CounterX + 1;
    if(CounterX==800)
    begin
        CounterX <= 0;
        CounterY <= CounterY + 1;
    end

    if(CounterY==449)
    begin
        CounterY <= 0;
        temporary <= temporary + 1;
    end
    vga_HS <= (CounterX>640+16 & CounterX<800-48);
    vga_VS <= (CounterY>400+12 & CounterY<449-35);
    cntx <= CounterX + temporary;
    cnty <= CounterY + (temporary >> 2);
    if(CounterY>8)
        pixvalue <= ((CounterX<640 & CounterY<400) & (CounterX[2] & CounterY[2])) | ((CounterX>128 & CounterX<512 & CounterY>64 & CounterY<300) & (cntx[4] & cnty[4]));
    else
        if(CounterX<640) pixvalue <= (fnt[CounterX+CounterY*760]>1);
end

assign out0i = ~vga_HS;
assign out1i = ~vga_VS;
assign out0q = pixvalue;

altpll0 altpll0 (
    .inclk0(clock),
    .c0(pll_clock)
);

endmodule
Output from a 1k resistor ladder DAC (3 resistors) with Cyclone II.
Driven by LVDS serializers at 500 MHz.

--

I think the concept of "culture" as used today should be clearly separated from the concept of religion. Religions are for the most part dogmas, quite different from culture which consists of temporary trends and behavioral models. Pretty much everyone knows the difference, but for some reason it is often ignored. We can judge the historical mistakes without holding anyone responsible for the sins of their fathers, but it is only natural and practical to judge the ideas and behaviors we now clearly see as inferior.
--

Simulation of quadrature signals from parametrically amplified quantum noise in a superconducting tunnel junction.
--

Stability through slavery.

Monday, 12 February 2018

LG OLED55B7V: an excellent panel, but a bit of a fucked up scaler

I recently purchased an LG OLED55B7V. It's a great 3840x2160 panel with 12 bpc (36 bits per pixel) color depth, perfect blacks, HDR and a 120 Hz refresh rate. However, the scaler appears to have a problem with chrominance whenever the video is not natively 4k. The same problem exists for all settings and all videos regardless of bitrate, but it's easiest to see with low-resolution videos and high color saturation. This is an expensive OLED device, so I would have expected the scaler to match the panel.

The TV doesn't seem to be doing a good job at rendering videos. Pure colors tend to appear very blocky.
The flaw is a little odd because it doesn't quite correlate with the chrominance resolution; the result is instead blockier than one would expect from simply incorrect rendering of chrominance.
The clip is from a movie called The Black Hole (1979).
The same video rendered by VLC in Windows 10 (connected with HDMI).
No such annoying blocking is visible.

The source image is a single red line (2 pixels in diameter). Clearly the TV applies some kind of nasty filter on it. This effect is not visible when the video is natively 4k.
ffmpeg -r 24 -f image2 -i rb.png -filter_complex loop=399:1 -pix_fmt yuv420p -vcodec libx264 -crf 1 rb.mp4 
WebOS
VLC in Windows 10 (connected with HDMI).
The original image used in these tests.
ffmpeg -r 24 -f image2 -i image.png -filter_complex loop=799:1 -an -vcodec mpeg4 -qscale 1 video.mp4
It would also be very nice if there were connections that allowed one to plug into the panel directly and use its full potential (4k/12bpc RGB/120Hz). Apparently the version of the HDMI standard used allows only 4k/8bpc RGB/60Hz, 4k/12bpc YUV420/60Hz, or lower resolutions with higher refresh rates. Nevertheless, it's the first time in a couple of decades I've been happy with the black levels. Luckily the rendering problems can always be avoided by simply using a PC.

--

Nothing to do with my new TV, but it brought to mind the stupid gamma error in most picture scaling software. The error comes from the fact that pixel brightness (power) is proportional to the pixel value (voltage) squared, while most scaling is done on pixel values rather than on pixel brightness. The result is this.


The left image is the original. The middle one is scaled by averaging pixel values.
The right one is scaled by correctly averaging pixel powers.



Sometimes the result of incorrect scaling is significantly distorted colors.
The left image is the original. The middle one is scaled by averaging pixel values.
The right one is scaled by averaging pixel powers.

RGBA = double(imread('gamma_dalai_lama_gray.jpg')); figure; image(RGBA/255); truesize;
RGBB = double(imread('gamma_colors.jpg')); figure; image(RGBB/255); truesize;

RGB1 = imresize(RGBA, 0.5, 'bicubic')/255;
RGB2 = abs(sqrt(imresize(RGBA.^2, 0.5, 'bicubic')))/255;

RGB3 = imresize(RGBB, 0.5, 'bicubic')/255;
RGB4 = abs(sqrt(imresize(RGBB.^2, 0.5, 'bicubic')))/255;

figure; image(imresize(RGB1, 2, 'nearest')); truesize; imwrite(RGB1, '001.jpg');
figure; image(imresize(RGB2, 2, 'nearest')); truesize; imwrite(RGB2, '002.jpg');
figure; image(imresize(RGB3, 2, 'nearest')); truesize; imwrite(RGB3, '003.jpg');
figure; image(imresize(RGB4, 2, 'nearest')); truesize; imwrite(RGB4, '004.jpg');

--

While we're at it, let's talk about YUV420. This is a signal format used in video compression, exploiting the fact that human vision is much less sensitive to changes in color than to changes in brightness. Instead of recording the primary colors red, green and blue, it records luminance (the overall brightness of the pixel) at full resolution and chrominance (color information separate from luminance) at half the original resolution. So, for example, 4k material recorded in YUV420 typically has a resolution of 3840x2160 for luminance and 1920x1080 for chrominance.
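The subsampling step itself can be sketched as follows (a hypothetical helper, not tied to any particular codec): each chroma plane is reduced by averaging 2x2 blocks.

```c
/* Downsample one chroma plane by averaging 2x2 blocks (4:2:0 subsampling).
   Assumes W and H are even; out must hold (W/2)*(H/2) samples. */
void subsample_chroma(const unsigned char *in, unsigned char *out, int W, int H)
{
    for (int y = 0; y < H; y += 2)
        for (int x = 0; x < W; x += 2) {
            int sum = in[y*W + x]     + in[y*W + x + 1]
                    + in[(y+1)*W + x] + in[(y+1)*W + x + 1];
            out[(y/2)*(W/2) + x/2] = (unsigned char)((sum + 2) / 4); /* rounded mean */
        }
}
```

For 3840x2160 material this leaves two 1920x1080 chroma planes next to the full-resolution luma plane, i.e. 12 bits per pixel on average instead of 24.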

These images can be rendered correctly or incorrectly. Since most display devices are natively RGB, the images are eventually always converted back to RGB, but this conversion can be done poorly. A typical mistake is that the chrominance is not interpolated when rendering the RGB data. This results in images where the colors are blocky, which is especially apparent where the luminance of two colors is the same but the colors themselves differ. Below is an example of what this effect looks like.

Improperly rendered YUV420 image data. The luminance is rendered properly, but the chrominance data is rendered without interpolation, which makes the image look much blockier than proper interpolation would. This effect is visible only in regions with high color saturation, and especially in regions where there is no difference in luminance but a large difference in chrominance.


The original RGB image vs. properly rendered YUV420 where the chrominance signal is interpolated to match the luminance resolution.

RGB = imread('chroma3.png');
YCBCR = rgb2ycbcr(RGB);
YCBCR1 = YCBCR;
YCBCR2 = YCBCR;
N = 2;
YCBCR1(:, :, 2) = imresize(imresize(YCBCR(:, :, 2), 1/N, 'bicubic'), N, 'nearest');
YCBCR1(:, :, 3) = imresize(imresize(YCBCR(:, :, 3), 1/N, 'bicubic'), N, 'nearest');
YCBCR2(:, :, 2) = imresize(imresize(YCBCR(:, :, 2), 1/N, 'bicubic'), N, 'bicubic');
YCBCR2(:, :, 3) = imresize(imresize(YCBCR(:, :, 3), 1/N, 'bicubic'), N, 'bicubic');
RGB2 = ycbcr2rgb(YCBCR1);
RGB3 = ycbcr2rgb(YCBCR2);
--

So let's have a quick look at 30-bit HDR material as well. The compression codec these videos typically use is H.265, also known as HEVC. The pixel format is YUV420p10le, where le refers to little endian (the byte order). While standard dynamic range has 256 different values per channel (8 bpc, 8 bits for each of the three colors), high dynamic range in these videos typically has 1024 different values (10 bpc). The main advantage is that more detail can survive in the extremely dark and extremely bright parts of the image.
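A tiny sketch of what the extra two bits mean numerically (hypothetical helpers): the number of code values quadruples, so the minimum brightness step shrinks by a factor of roughly four.

```c
/* Quantize a normalized brightness v in [0, 1] to an n-bit code value. */
int quantize(double v, int bits)
{
    int levels = 1 << bits;           /* 256 at 8 bpc, 1024 at 10 bpc */
    return (int)(v * (levels - 1) + 0.5);
}

/* Smallest representable brightness step at n bits. */
double step(int bits)
{
    return 1.0 / ((1 << bits) - 1);
}
```

Whether those extra values go to finer steps or to a wider range is the SDR10 vs. HDR10 tradeoff discussed below.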







If you consider the middle image as the standard range of light levels your camera would see, you can observe that it is missing details in both the extremely bright and the extremely dark regions; in other words, the image is simultaneously overexposed and underexposed. Having additional bits allows one to capture the details in these missing regions. The middle image shows the middle 8 bits of the total 10, while the top and bottom show the brightest and darkest 8.

Suppose your standard dynamic range captures brightness values between the blue bars. Additional bits allow capture of the information outside of your standard range.

The increased number of bits allows more brightness values, which can be used either to increase the dynamic range (more darks and brights), to decrease the minimum distance between different brightness values, or some compromise between the two.


The topmost picture "simulates" (by exaggerated banding and limited maximum brightness) standard dynamic range with the standard number of bits. The limited number of bits results in banding of the brightness values in an image that would ideally be a continuous change from total darkness to maximum brightness. The middle picture "simulates" standard dynamic range with two extra bits used to decrease the minimum spacing between brightness values. The third picture "simulates" high dynamic range with two extra bits: one bit is used to decrease the minimum spacing between brightness values and one to increase the dynamic range, now allowing colors brighter than standard dynamic range. This type of high dynamic range still shows more banding than standard dynamic range with two extra bits, but less than standard dynamic range with the standard number of bits. These cases are analogous to SDR8, SDR10 and HDR10.

I'm personally quite capable of seeing the banding in regular SDR8 stripes. Though I will admit that under normal circumstances, when the image contains a certain amount of noise, the noise acts as dithering and pretty much masks any banding that might otherwise be visible.


Both of the images above have the same number of colors.
The lower one is just "dithered" in such a way that the spatial pixel noise reflects the continuous gradient.



One could also dither in time by making the pixels flicker at suitable duty cycles.
Of course, combining in some noise improves the result significantly.



close all
clear all

A = 0; 
% A = 0.1;
B = 1;
% B = 0.15;
M = 16;
% M = 256;
X = 1200;
Y = 256;
img = zeros(Y, X);
for x = 1:Y
    img(x, :) = linspace(A, B, X);
end

figure;
im = round((M-1)*img);
image(uint8(im));
colormap(gray(M));
truesize;

pix = zeros(Y, X);
for y = 1:Y
    for x = 1:X
        p = (M-1)*img(y, x);
        q = floor((M-1)*img(y, x));
        if rand>p-q
            pix(y, x) = q;
        else
            pix(y, x) = q + 1;
        end
    end
end

figure;
image(uint8(pix));
colormap(gray(M));
truesize;
pixx = pix;

pix = zeros(Y, X);
for y = 1:Y
    for x = 1:X
        p = (M-1)*img(y, x);
        q = floor((M-1)*img(y, x));
        if rand>p-q
            v = q;
        else
            v = q + 1;
        end
        m = round(randn/2);
        while abs(m)>1
            m = round(randn/2);
        end
        if v>0 && v<M-1
            pix(y, x) = v + m;
        else
            pix(y, x) = v;
        end
    end
end

figure;
image(uint8(pix));
colormap(gray(M));
truesize;

figure; hold on;
plot(mean(pix, 1));
plot(mean(pixx, 1));

Eight shades of gray.

Sunday, 7 January 2018

One of my greatest fears is that one day I'll wake up in a gridlock I predicted ages ago

Einstein field equations (EFE)



describe the four dimensional geometry g associated with the points of spacetime and the corresponding stress-energy tensor T. The differential terms are a function of the metric as described below.
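In standard notation the field equations read (reconstructed here, with Λ the cosmological constant):

```latex
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}
```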


Variable g with upper indices (contravariant) is the inverse of the (covariant) metric.


T describes the flux of four-momentum between different dimensions, including time. The first term (with upper indices tt) is simply the invariant mass-energy. For a single isolated particle the following applies in natural units (the t-direction component of the 4-dimensional v is c, and v with lower index 3 represents the regular 3-dimensional velocity vector).


The contravariant stress-energy tensor can be converted into covariant form as follows.
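In index notation (a reconstruction, lowering both indices with the metric):

```latex
T_{\mu\nu} = g_{\mu\alpha} \, g_{\nu\beta} \, T^{\alpha\beta}
```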


The EFE are a kind of generalization of Newton's gravitational potential (the Poisson equation).
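In the weak-field, slow-motion limit the tt-component of the EFE reduces to the Poisson equation for the Newtonian potential φ (a standard result, reconstructed here):

```latex
\nabla^2 \phi = 4 \pi G \rho
```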


The EFE describe a kind of 4-dimensional array of matrices (in a naive numerical sense) corresponding to the structure of such a spacetime. This sort of block universe can be seen as static. Theoretically it's possible that in this sort of block there could exist "flows" and geometries that form a closed loop in what we would normally call time (allowing a kind of time travel into the past). The naive interpretation of the EFE would suggest that if such loops exist, they ought to form a consistent spacetime. That is to say, in order for them to be a solution to the equations, no grandfather paradoxes can form. You will (for one reason or another) only do such things in the "past" that are consistent with the block as a whole. Quantum mechanics may significantly complicate the situation if something along the lines of these loops is possible but instead leads to alternative realities. The block of all universes must still be consistent as a whole, but as there are now several alternative realities and bridges between them, the consistency requirement can still apply and allow some particular futures to be consistent with the time traveller "causing" them.

How to actually compute a solution to some practical problem with EFE is a topic for some other time.
--
The end justifies the means as long as the means are factored into the end.

If you could actually rationalize your opinion, it wouldn't be an opinion.

I don't really care about subjective changes all that much, it's the objective ones that I find interesting.

Every January I remember my mortality, but then I forget it for a while again.

Sometimes I consider will a flaw.

What to do when you reach a conclusion that is highly likely to be sound, yet highly undesirable?

...it's just not in my nature.

A true explanation must come to an end where no further explanations are required and none can exist, but can it be?
--
BTW, The Great Philosophers by Stephen Law is a pretty nice and compact book about philosophy.
--
#define N 16 /* grid points per spacetime dimension (example value) */

double g[4][4][N*N*N*N]; /* spacetime metric tensors */
double T[4][4][N*N*N*N]; /* spacetime stress-energy tensors */

int npow(int i) {
  int val = 1;
  for(int j=0; j<i; j++)
    val = val*N;
  return val;
}

/* computes values of individual Christoffel symbols */
double christoffel(int i, int j, int l, int loc) {
  double val = 0.0, d;
  for(int k=0; k<4; k++) {
    d = \
      (g[k][i][loc + npow(j)] - g[k][i][loc - npow(j)]) +  \
      (g[k][j][loc + npow(i)] - g[k][j][loc - npow(i)]) -  \
      (g[i][j][loc + npow(k)] - g[i][j][loc - npow(k)]);
    val = val + g[l][k][loc]*d;
  }
  return 0.5*val;
}

/* computes values of individual components of Ricci curvature tensor */
double ricci(int i, int j, int loc) {
  double val = 0.0;
  double a = 0.0, b = 0.0;
  for(int l=0; l<4; l++) val += \
    christoffel(i, j, l, loc + npow(l)) - christoffel(i, j, l, loc - npow(l));
  for(int l=0; l<4; l++) val -= \
    christoffel(i, l, l, loc + npow(j)) - christoffel(i, l, l, loc - npow(j));
  for(int m=0; m<4; m++)
    for(int l=0; l<4; l++) val += \
      christoffel(i, j, m, loc)*christoffel(m, l, l, loc);
  for(int m=0; m<4; m++)
    for(int l=0; l<4; l++) val -= \
      christoffel(i, l, m, loc)*christoffel(m, j, l, loc);
  return val;
}

/* computes scalar curvature */
double scalar_curvature(int loc) {
  double gi[4][4], A[4][4], val = 0.0;
  double A2[4][4], A3[4][4], trA, trA2, trA3, det = 0.0;

  /* copy the metric tensor at loc into A; zero gi, A2 and A3 */
  for(int i=0; i<4; i++)
    for(int j=0; j<4; j++) {
      A[i][j] = g[i][j][loc];
      gi[i][j] = A2[i][j] = A3[i][j] = 0.0;
    }

  /* compute g^2 and g^3 */
  for(int i=0; i<4; i++)
    for(int j=0; j<4; j++)
      for(int k=0; k<4; k++)
        A2[i][j] += A[i][k]*A[k][j];
  for(int i=0; i<4; i++)
    for(int j=0; j<4; j++)
      for(int k=0; k<4; k++)
        A3[i][j] += A2[i][k]*A[k][j];

  /* use Cayley-Hamilton method to compute inverse of g */
  trA  =  A[0][0] +  A[1][1] +  A[2][2] +  A[3][3];
  trA2 = A2[0][0] + A2[1][1] + A2[2][2] + A2[3][3];
  trA3 = A3[0][0] + A3[1][1] + A3[2][2] + A3[3][3];
  for(int i=0; i<4; i++) {
    gi[i][i] = (trA*trA*trA - 3*trA*trA2 + 2*trA3)/6.0;
    for(int j=0; j<4; j++)
      gi[i][j] += A2[i][j]*trA - A3[i][j] - 0.5*A[i][j]*(trA*trA-trA2);
  }

  /* division of gi such that (g*gi)_{00} = 1 */
  for(int k=0; k<4; k++)
    det += A[0][k]*gi[k][0];
  for(int i=0; i<4; i++)
    for(int j=0; j<4; j++)
      gi[i][j] /= det;

  /* compute Ricci scalar */
  for(int i=0; i<4; i++)
    for(int j=0; j<4; j++)
      val += gi[i][j]*ricci(i, j, loc);

  return val;
}