Complete tangent, but what is going on with this image [1]? A render? AI? Too much post-processing? It has a computer-game-graphics look to me, but I can't quite put my finger on what seems off.
For years now all their images have this look, everything sharp at all distances. I enjoy it because it goes against the shallow depth of field trend that has been dominant, it’s refreshing. I think they achieve it by focus stacking, compositing multiple images focused at different distances.
I’m not sure if it’s AI so much as a composition of dozens of images stacked on top of each other. The shadows of different objects seem to be going in different directions.
The camera and headphones are composited in, pretty sure the skyline is shopped in as well (the shadows on the desk should be much harsher given the bright sky), same with what's on screen. The displays being mirrored for no reason doesn't exactly help sell the reality of it either.
A further bit of a tangent, but anyway: what really strikes me is the choice of such an image to represent whatever they're trying to convey. It feels bland, and there's a kind of underlying sadness to it... the books, the small sculpture, the shelf, the desk... it all drags me down.
I'm pretty sure the "fakeness" is intentional. The image seems designed to appeal to a specific target audience (when I look at their 'AI erase/replace tool' example I get a clear idea).
We may be witnessing a fascinating trend: AI images are making professional-grade imagery look like spam, while natural lighting and blurry images are becoming the new "human" aesthetic.
Where do you see the exponential blow-up? If you replace every function in an expression tree with a tree of EML functions, that is a size increase by a constant factor, and the factor does not seem unreasonable - somewhere in the range of 10 to 100.
But that is not an increase in the expression size, that is the effort of searching for an expression tree that fits some target function. And that is no different from searching for an expression built from common functions, which is of course also exponential in the height of the expression tree. The difference is that an EML-based tree will have a larger height - by some constant factor - than a tree based on common functions. On the other hand, each vertex in an EML-based tree can only be eml, one, or an input variable, whereas in a tree based on common functions each vertex can be any of the supported basis functions - counting constants and input variables as nullary functions.
Yes, and even this search doesn't actually require trillions of parameters, since the switching parameters will be sparse, which means you can apply a FakeParameter trick: suppose I want a trillion sparse parameters, that's a million by a million. Let's just model those parameters as inner products of a million vectors, each of some dimension N. Now it's in the regime of megabytes or a GB.
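A minimal sketch of that factorization idea (names and scales are mine, not the commenter's; a toy M = 1000 stands in for a million): instead of materializing an M x M parameter matrix, store M vectors of dimension N and compute any entry as an inner product on demand.

```python
import random

# Sketch of the "FakeParameter" trick described above (hypothetical names):
# entry (i, j) of an implicit M x M matrix is the inner product of
# vectors i and j, so storage drops from M*M entries to M*N numbers.
def make_factored_params(M, N, seed=0):
    rng = random.Random(seed)
    vectors = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(M)]

    def param(i, j):
        # Materialize entry (i, j) on demand as an inner product.
        return sum(a * b for a, b in zip(vectors[i], vectors[j]))

    return param, vectors

M, N = 1000, 8                  # toy scale; the comment's M would be 1e6
param, vectors = make_factored_params(M, N)

implicit_entries = M * M        # addressable parameters: 1,000,000
stored_numbers = M * N          # numbers actually stored: 8,000
print(implicit_entries, stored_numbers)
```

At the comment's scale (M = 10^6, N in the tens), the stored data is tens of millions of floats, which matches the "megabytes or a GB" estimate.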
For extreme regularization, one can even go down to 10 arbitrary precision numbers: if we have a single vector of 10 dimensions, we can re-order the components 10! different ways.
10! = 3,628,800
so we can retrieve ~3M vectors from it, and we can form about 10 T inner products.
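The arithmetic checks out; a quick verification of the counts above:

```python
import math

perms = math.factorial(10)   # orderings of a 10-dimensional vector
print(perms)                 # 3628800, i.e. ~3.6M distinct vectors

# Inner products between any ordered pair of those reorderings:
pairs = perms * perms
print(pairs)                 # 13168189440000, i.e. ~1.3e13, "about 10 T"
```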
Ah I see I misunderstood your point, thanks for clarifying.
I think you are right, the size of each node in some other expression tree formed from primitives in some set Prim would increase by at most max(nodes(expr) for expr in Prim).
That's essentially what the EML compiler is doing from what I understand.
So is everyone with enough power; every law requires enforcement. But even without enforcement, or with the ability to outright block laws, being in violation of international law still matters. It informs others whether you truly believe in a rules-based order or whether you only use it as a tool when it benefits you, and they will adjust their behavior accordingly. Also, if you want support from others while you are in violation of international law, they will think twice about whether they should support you.
Some of the scenes from the video remind me of Manifold Garden [1] - only in 3D, on a 3-torus [2], and you can change the direction of gravity, i.e. what is up and down. It is also visually beautiful.
The question that started this wasn't about clocks. It was about what happens when you remove every cultural assumption from timekeeping and ask: what's left?
This still measures the time of day in seconds since midnight. It still encodes the number of seconds into the common base 60 system of hours, minutes, and seconds. It still encodes the base 60 digits as base 10 numerals. The only differences are the choice of digits - regular polygons instead of an established set of digits like the Arabic digits - and the writing direction - increasing in scale, radially outwards instead of horizontally or vertically - defining the positional value of each digit.
Simply a dot moving around a circle once per day would have abandoned far more cultural assumptions than this - of course at the cost of being harder to read precisely and looking less fancy.
This combination of base 60 and base 10 can also be understood as a multi-base numeral system. 12:34:56 can be understood as 123456 with non-uniform positional values 1, 10, 60, 600, 3,600, 36,000 from right to left directly yielding the number of seconds since midnight as 1 x 36,000 + 2 x 3,600 + 3 x 600 + 4 x 60 + 5 x 10 + 6 x 1 = 45,296.
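The multi-base reading described above translates directly into code; a small sketch using exactly those place values:

```python
# Read hh:mm:ss as a multi-base numeral: digits of 12:34:56 with
# non-uniform place values 36000, 3600, 600, 60, 10, 1 (left to right).
def seconds_since_midnight(hhmmss):
    digits = [int(d) for d in hhmmss.replace(":", "")]
    place_values = [36000, 3600, 600, 60, 10, 1]
    return sum(d * p for d, p in zip(digits, place_values))

print(seconds_since_midnight("12:34:56"))  # 45296, same as 12*3600 + 34*60 + 56
```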
The polygon numerals are actually similar to Babylonian cuneiform numerals [1]. They use a positional system just like Hindu-Arabic numerals, with the positional value increasing by a factor of the base - 10 for Hindu-Arabic numerals, 60 for Babylonian cuneiform numerals - from right to left. But there are no distinct digits 0 to 9 - or rather 0 to 59, because of base 60 - instead a symbol for one (I) [2] is repeated n times, like in Roman numerals. So IIII II is the digits 4 and 2, but in base 60, hence 4 x 60^1 + 2 x 60^0 = 242. Ignoring the edges, the polygon numerals express the digit value by repeating a vertex 0 to 9 times, and each scale increase of the polygon adds a factor according to the 60 and 10 multi-base representation described above.
[2] Because repeating the symbol for one (I) up to 59 times is inconvenient, they have a symbol for ten (<) as a shortcut, just as the Roman numerals have V for IIIII. <II <<<IIII is (1 x 10 + 2 x 1) x 60^1 + (3 x 10 + 4 x 1) x 60^0 = 754.
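The ASCII transliteration used in these two comments can be parsed in a few lines (a sketch; each space-separated group is one base-60 digit, written as tens (<) followed by ones (I)):

```python
# Parse the ASCII transliteration of Babylonian numerals used above:
# a group like <<<IIII is one base-60 digit worth 3*10 + 4 = 34.
def babylonian_value(s):
    total = 0
    for group in s.split():
        digit = 10 * group.count("<") + group.count("I")
        total = total * 60 + digit
    return total

print(babylonian_value("IIII II"))      # 4*60 + 2 = 242
print(babylonian_value("<II <<<IIII"))  # 12*60 + 34 = 754
```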
How many seconds of video did they generate per day for those $15,000,000, i.e. what would it actually cost me to generate, say, a three minute music video for my garage band? This should probably take into account how many attempts I would likely need to arrive at something I am satisfied with.
Assuming I know what I want and am somewhat competent at describing it, I would guess ten times the final length should be plenty. If you are exploring different options, you can of course produce an unlimited number of videos. But that is not really what I was referring to; I was more thinking of how many attempts it takes the model to produce what you want given a good prompt - I have never used it and have no idea whether it nails it essentially every time or whether I should expect to run the same prompt ten times to get one good result.
How effective is property based testing in practice? I would assume it has no trouble uncovering things like missing null checks or an inverted condition because you can cover edge cases like null, -1, 0, 1, 2^n - 1 with relatively few test cases and exhaustively test booleans. But beyond that, if I have a handful of integers, dates, or strings, then the state space is just enormous and it seems all but impossible to me that blindly trying random inputs will ever find any interesting input. If I have a condition like (state == "disallowed") or (limit == 4096) when it should have been 4095, what are the odds that a random input will ever pass this condition and test the code behind it?
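The odds in question can be illustrated with plain Python (this is not any particular PBT library; the boundary-value list and bias rate are my own stand-ins for what such libraries typically do): uniformly random 32-bit inputs essentially never satisfy a condition like limit == 4096, while a generator biased toward boundary values hits it quickly.

```python
import random

# Uniformly random ints vs. a generator biased toward boundary values
# (the strategy real property-testing tools use). Special values and
# the 25% bias rate are illustrative assumptions.
SPECIAL = [0, 1, -1, 4095, 4096, 2**31 - 1]

def naive(rng):
    return rng.randrange(2**32)

def biased(rng):
    # Draw a boundary value a quarter of the time.
    return rng.choice(SPECIAL) if rng.random() < 0.25 else rng.randrange(2**32)

rng = random.Random(42)
trials = 2000
naive_hits = sum(naive(rng) == 4096 for _ in range(trials))
biased_hits = sum(biased(rng) == 4096 for _ in range(trials))
print(naive_hits, biased_hits)  # naive: almost surely 0; biased: dozens
```

With uniform sampling, the expected number of trials before hitting one specific 32-bit value is about 4 billion, which is why practical tools bias generation toward boundaries and let you write custom generators.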
Microsoft had a remotely similar tool named Pex [1], but instead of randomly generating inputs, it instrumented the code so it could also be executed symbolically, and then used their Z3 theorem prover to systematically find inputs making each encountered condition either true or false, thereby incrementally exploring all possible execution paths. If I remember correctly, it then generated a unit test for each discovered input with the corresponding output, and you could then judge whether the output was what you expected.
In practice I’ve found that property based testing has a very high ratio of value per effort of test written.
UI tests like:
* if there is one or more items on the page one has focus
* if there is more than one then hitting tab changes focus
* if there is at least one, focusing on element x, hitting tab n times and then shift tab n times puts me back on the original element
* if there are n elements, n>0, hitting tab n times visits n unique elements
Are pretty clear and yet cover a remarkable range of issues. I had these for a UI library, which came with the preamble "given a UI built with arbitrary calls to the API, these things remain true".
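The focus properties above can be checked against a toy model (hypothetical focus-ring, not the actual UI library from the comment): Tab advances focus through n elements, Shift-Tab goes back, and the stated invariants hold for arbitrary n and key sequences.

```python
import random

# Toy focus model: n elements, focus index i; Tab advances, Shift-Tab
# goes back, both wrapping around.
def tab(i, n):
    return (i + 1) % n

def shift_tab(i, n):
    return (i - 1) % n

rng = random.Random(0)
for _ in range(200):                  # "arbitrary calls to the API"
    n = rng.randrange(1, 20)          # at least one item on the page
    start = rng.randrange(n)
    k = rng.randrange(1, 50)

    # Property: Tab k times then Shift-Tab k times returns to the start.
    i = start
    for _ in range(k):
        i = tab(i, n)
    for _ in range(k):
        i = shift_tab(i, n)
    assert i == start

    # Property: Tab n times visits n unique elements.
    i, seen = start, set()
    for _ in range(n):
        seen.add(i)
        i = tab(i, n)
    assert len(seen) == n
print("all focus properties hold")
```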
Now, it's rare that they'd catch very specific edge cases, but it was hard to accidentally write something wrong and still pass the tests. They actually found an inconsistency in the specification.
I think they can often be easier to write than specific tests and clearer to read, because they state what you are actually testing (a generic property) rather than a few explicit examples.
What you could add though is code coverage. If you don’t go through your extremely specific branch that’s a sign there may be a bug hiding there.
An important step with property based testing and similar techniques is writing your own generators for your domain objects. I have used it to incredible effect for many years in projects.
I work at Antithesis now, so you can take this with a grain of salt, but everything changed for me over a decade ago when I started applying PBT techniques broadly. I have found so many bugs that I wouldn't otherwise have found until production.
"Exhaustively covering the search space" or "hitting specific edge cases" is the wrong way to think about property tests, in my experience. I find them most valuable as insanity checks, i.e. they can verify that basic invariants hold under conditions even I wouldn't think of testing manually. I'd check for empty strings, short strings, long strings, strings without spaces, strings with spaces, strings with weird characters, etc. But I might not think of testing with a string that's only spaces. The generator will.
One of the founders of Antithesis gave a talk about this problem last week; diversity in test cases is definitely an issue they're trying to tackle. The example he gave was Spanner tests never filling the cache because randomly generated inputs jittered near zero. Avoiding that kind of degenerate behavior appears to be a company goal.
Glad you enjoyed the talk! Making Bombadil able to take advantage of the intelligence in the Antithesis platform is definitely a goal, but we wanted to get a great open source tool into peoples’ hands ASAP first.
It's been an independence war for Israel every day since 1948.
It is not an independence war, it is colonization and occupation. And the Zionists at the time knew that and used this terminology themselves. 1948 was not an independence war, it was the preliminary culmination of the attempt to occupy and annex Palestine into a Jewish state against the resistance of the Palestinians. And the resistance against this illegal act has continued to this day.
[1] https://images.blackmagicdesign.com/images/products/davincir...