The examples you gave are in m/d/y format though, and DATEVALUE() parses your examples correctly into Jan 15th, 2024 and January 3rd, 2024.
DATEVALUE() parses ambiguous short date formats (e.g. 1/3/24) using the short date format specified in the Region settings of Windows Control Panel. So if you want to parse d/m/y format, you can try changing the settings there.
My takeaway is that the set of points which get worse as they are pulled toward point P occupies some region R. As the number of dimensions increases, region R's volume shrinks as a percentage of the total cloud volume, making it much less likely that a sample is drawn from that region. In other words, you are more likely to sample points which move closer to the center than away from it, which is why the estimator is an improvement on average.
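A quick Monte Carlo sketch of this argument, using the classic James-Stein setup (my own toy construction, not from the original discussion): estimate a d-dimensional mean from a single draw, shrinking each sample toward the origin, which plays the role of "point P" here.

```python
import numpy as np

# Toy James-Stein simulation: X ~ N(theta, I_d), and the shrinkage
# estimator pulls X toward the origin. For d >= 3 it dominates the
# plain estimator (X itself) on average, as the comment describes.
rng = np.random.default_rng(0)
d, trials = 10, 20_000
theta = np.ones(d)  # true mean (arbitrary choice)

X = rng.normal(theta, 1.0, size=(trials, d))
norm2 = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (d - 2) / norm2) * X  # shrink each sample toward 0

mse_plain = np.mean(np.sum((X - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(mse_plain, mse_js)  # shrinkage wins on average
```

Individual samples can still be made worse by the shrinkage, but as d grows the region where that happens takes up a vanishing share of the probability mass, so the average error drops.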
There seems to be a lot of discussion about stock buybacks vs. dividends. Note that Apple also announced an increase of their dividend from $0.24/share to $0.25/share[0], so they are both repurchasing stock and paying out more in dividends.
Also, the Inflation Reduction Act introduced a 1% tax on stock buybacks starting in 2023[1], so there is still in some sense a tax impact on the remaining shareholders.
Interesting. But Apple can easily get around that tax by using one of its foreign companies, one of its holding companies, or one of its foreign brokerage accounts to buy back the U.S. stock from overseas. That non-regressive tax is only for weaker medium sized companies that don’t have any international incorporations or companies that want to pay more taxes out of charity.
It’s very sad that the Inflation Reduction Act was nothing more than tax increases when it should’ve been cuts to wasteful government spending. Instead of cutting spending, they did the opposite and increased it.
>The Affordable Care Act introduced the newer perverse incentive of insurance companies tolerating higher prices. Because if you can only pay your execs bonuses based on 20% of your total spend, insurance companies can pay execs more if the underlying claims they're paying cost more. 20% of a $1,000 MRI isn't as juicy to your C levels as 20% of a $10,000 MRI.
If this perverse incentive were real, then you would expect medical care costs to increase rapidly after the passage of the ACA, but medical trend has actually slowed down since then[0][1].
You can also look at the total dollar spend on hospital and medical expenses for the entire health insurance industry [2]. After the passage of ACA, there was a large increase in medical spend as more people obtained insurance coverage, but the increase in medical expenses year over year has settled back into the 5% range, which doesn't seem that perverse.
Having no idea what the real trend is, couldn't any other factors counteract the perverse effect, making both the parent's and your observations true at the same time?
For instance, the sustained efforts that led to the ACA probably didn't stop there, and more efforts were surely made to reduce healthcare costs afterward. If the ACA could be passed in the first place, it wouldn't be surprising if other effective measures were passed as well.
>Having no idea what the real trend is, couldn't any other factors counteract the perverse effect, making both the parent's and your observations true at the same time?
Absolutely. There could be other factors that are counteracting the effect, including those within the ACA itself since the bill contains many changes to healthcare, not just the restriction on medical loss ratios.
From your second link, except in the last year, healthcare spend has been outpacing inflation by more than 2 to 1. The economies of scale are backwards in healthcare; I find that perverse.
I think we can agree that something will need to change, as medical trend cannot outpace inflation forever. I don't think the issue is executive compensation, though: even if insurance companies could magically perform all their services for free, you would reduce premiums by at most 20%.
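A toy calculation of that ceiling (the dollar figures are made up purely for illustration): under the ACA's 80% medical loss ratio floor, at most 20% of premium covers everything besides claims, so eliminating that entire 20% bounds the possible premium savings.

```python
# Illustrative numbers only: per-member annual figures at the MLR floor.
claims = 800    # medical claims paid (the mandated 80% of premium)
overhead = 200  # admin, profit, exec pay: capped at 20% of premium
premium = claims + overhead

# Even if overhead, including all executive compensation, dropped to zero,
# the premium could only fall to the claims cost itself.
best_case_premium = claims
savings = 1 - best_case_premium / premium
print(f"{savings:.0%}")  # at most a 20% reduction
```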
> I recently just renewed my errors and omissions policy and was a little shocked to find out if you let the policy expire then the policy does not protect you even in past while the policy was active.
This is to protect the insurer against adverse selection. Errors and omissions policies are typically on a "claims-made" basis, which means they cover claims that were filed against you during the policy period, as opposed to incidents occurring during the policy period.
It is assumed that if you let your policy lapse and then buy a new policy, you know there is some incoming claim against you for which you need protection.
I don't have a ton of experience in E&O, but I do work in insurance and I don't think this is right. Buying a policy to file a claim for an event that already happened is called fraud, and while it does happen, it's generally pretty easy for insurers to detect and prevent.
In general the advantage of claims-made policies from an insurer's perspective is that they don't have to hold incurred but not reported (IBNR) reserves after the policy term. With an occurrence-basis policy on a relatively long-tailed line like E&O, the insurer generally needs to hold IBNR for several years after the end of the policy term. It's expensive to lock up capital for that long - as the insured you effectively pay for that capital (through higher premiums) at the insurer's cost of equity, generally upwards of 10% annualized for stock insurers.
Obviously the insurer's tail risk is also lower for a claims-made policy, but that's less of a factor at scale since it's diversifiable. (Or at least mostly diversifiable - there might be some contagion risk.)
You are correct; those are some of the advantages of claims-made vs. occurrence policies. Claims-made policies started because the industry went through a crisis in the 1960s and '70s.
>Buying a policy to file a claim for an event that already happened is called fraud, and while it does happen, it's generally pretty easy for insurers to detect and prevent.
Detection has underwriting costs associated with it. Investigating whether you knowingly purchased a policy with an incoming claim is not something that is automated. By default, a claims-made policy prevents the issue in the first place by resetting what is known as the retroactive date. This causes the lapse in coverage the OP was referring to.
There are specialty insurers who will allow you to explain extenuating circumstances, and/or set retroactive dates prior to the effective date of the first claims-made policy. You obviously pay for the additional risk this poses to the insurer though.
But this isn't how other insurance policies work, at least in the US. Let's say I destroy someone's property today using my truck. Something simple, like I drive across their front yard and break their water line. I leave the scene and just go home.
6 months later, an investigation is conducted and they find surveillance footage of me driving in and out of the neighborhood and come to the conclusion that I broke the water line. A claim is made against my insurance. At that point, it doesn't matter if the policy is still active or even if I'm dead. The liability insurance I have will cover the cost of the damages provided someone gathers enough evidence and presents it.
> But this isn't how other insurance policies work, at least in the US.
How an insurance policy works depends on the terms of the policy. No more, no less.
> An occurrence policy will cover claims related to activities or events that occurred while your policy was in effect. Even if your policy expired or you canceled it, the claim would be covered if the event happened during the policy period. […]
> When you buy a claims-made policy, you will be covered if both the event and the claim arise while the policy is active and are reported during that time period. If you do not add the expired term to the subsequent policy period, you will lose coverage for any previously unknown claim that took place during the prior policy cycle. […]
> An extended reporting period (ERP) is a feature you can add to your claims-made professional liability insurance policy. It allows you to report claims even after your policy expires. This policy endorsement is also known as tail coverage.
> The liability insurance I have will cover the cost of the damages provided someone gathers enough evidence and presents it.
Does it? Have you read the terms of your policy? Or are you just assuming it does? Even if you're right, it could be that, since vehicle insurance is mandated in most jurisdictions for public roads, certain clauses are mandated as well. For other types of insurance, which are not government-mandated, it is the responsibility of the purchaser to actually check the contract they're signing.
I understand the potential reasons, but still: if I had an active policy in 2020, let it expire in 2021, and something happened in 2020 while I had coverage, I am not covered. That feels wrong, since I paid my premiums for 2020.
I think part of the problem is that any scripting language rolled directly into Excel will be expected to keep backwards compatibility. Microsoft doesn't have full control over Python/JS, and they would like to avoid issues such as the changeover from Python 2 to 3.
Microsoft could implement its own fork of those languages, but is that what customers actually want?
>Pandas is a very popular tool for data analysis. It comes built-in with many useful features, it's battle tested and widely accepted. However, pandas is not always the best tool for the job.
SQL is very useful, but there are some data manipulations which are much easier to perform in pandas/dplyr/data.table than in SQL. For example, the article discusses how to perform a pivot table, which takes data in a "long" format, and makes it "wider".
>SELECT
role,
SUM(CASE department WHEN 'R&D' THEN 1 ELSE 0 END) as "R&D",
SUM(CASE department WHEN 'Sales' THEN 1 ELSE 0 END) as "Sales"
FROM
emp
GROUP BY
role;
Not only does the SQL code require you to know up front how many distinct columns you are creating, it requires you to write a line out for each new column. This is okay in simple cases, but is untenable when you are pivoting on a column with hundreds or more distinct values, such as dates or zip codes.
There are some SQL dialects which provide pivot functions like in pandas, but they are not universal.
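For comparison, here is a sketch of the same pivot in pandas (the `role`/`department` column names follow the quoted SQL; the sample rows are made up): the distinct departments become columns automatically, with no need to enumerate them up front.

```python
import pandas as pd

# Same pivot as the quoted SQL, but pd.crosstab discovers the distinct
# department values at runtime and creates one column per value.
emp = pd.DataFrame({
    "role": ["Engineer", "Engineer", "Manager", "Analyst"],
    "department": ["R&D", "Sales", "R&D", "Sales"],
})
wide = pd.crosstab(emp["role"], emp["department"])
print(wide)
```

The same one-liner works unchanged whether the pivot column has two distinct values or two thousand, which is the flexibility the SQL version lacks.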
There are other examples in the article where the SQL code is much longer and less flexible, such as binning, where the bins are hardcoded into the query.
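The binning case can be sketched the same way (hypothetical ages and bin edges of my choosing): instead of hardcoding one CASE branch per bin in the query, the edges live in a single list that is easy to change.

```python
import pandas as pd

# Binning without hardcoded CASE branches: pd.cut assigns each value to
# a half-open interval, and the edges are data rather than query text.
ages = pd.Series([23, 31, 45, 52, 67])
binned = pd.cut(ages, bins=[0, 30, 50, 120], labels=["<30", "30-50", "50+"])
print(binned.value_counts().sort_index())
```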
I've been doing a lot of data analysis in Pandas recently. I started off thinking that for efficiency's sake, I should do as much initial processing in the DB as possible, and use Pandas just for the higher level functions that were difficult to do in SQL.
But after some trial and error, I find it much faster to pull relatively large, unprocessed datasets and do everything in Pandas on the local client. Faster both in total analysis time, and faster in DB cycles.
It seems like a couple of simple "select * from cars" and "select * from drivers where age < 30", and doing all the joining, filtering, and summarizing on my machine, is often less burdensome on the db than doing it up-front in SQL.
Of course, this can change depending on the specific dataset, how big it is, how you're indexed, and all that jazz. Just wanted to mention how my initial intuition was misguided.
I've always been disappointed by the SQL pivot. It's hardly useful for me if I have to know up front all of the columns it's going to pivot out into. The workaround would be to use another SQL query to generate a dynamic SQL query, but at that point I would rather just use Pandas.
Agreed. https://ibis-project.org/ and https://dbplyr.tidyverse.org/ can compile dataframe-like input to SQL, which might bridge the gap in tooling (although there are still small differences from the pure dataframe syntax).
It is not much better than the canonical example given in the article. It still has the following usability issues:
- You still need to enumerate and label each new column and its type. This particular problem is fixed by crosstabN().
- You need to know up front how many columns will be created before performing the pivot. In the context of data analysis, this is often dynamic or unknown.
- The input to the function is not a dataframe but a text string that generates the pre-pivot results. This means your analysis up to that point needs to be converted into a string. Not only does this disrupt the flow of an analysis, you also have to worry about escape characters in your string.
- It is not standard across SQL dialects. This function is specific to Postgres, and other dialects have their own versions with their own limitations.
The article contains several examples like this where SQL is much more verbose and brittle than the equivalent pandas code.
That's one of the non-standard ways to do it. MSSQL and Oracle also have a pivot function to do this. Unfortunately there is no standard way to do this.
a pattern that i converged on --- at least in postgres --- is to aggregate your data into json objects and then go from there. you don't need to know how many attributes (columns) should be in the result of your pivot. you can also do this in reverse (pivot from wide to long) with the same technique.
so for example if you have the schema `(obj_id, key, value)` in a long-formatted table, where an `obj_id` will have data spanning multiple rows, then you can issue a query like
```
SELECT obj_id, jsonb_object_agg(key, value) FROM table GROUP BY obj_id;
```
up to actual syntax...it's been awhile since i've had to do a task requiring this, so details are fuzzy but pattern's there.
so each row in your query result would look like a json document: `(obj_id, {"key1": "value", "key2": "value", ...})`
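the same aggregation, sketched in plain Python with made-up rows, just to show the shape of the pattern:

```python
from collections import defaultdict

# Mimic jsonb_object_agg: collapse long-format (obj_id, key, value)
# rows into one dict per obj_id, without knowing the keys up front.
rows = [
    (1, "color", "red"),
    (1, "size", "L"),
    (2, "color", "blue"),
]
objs = defaultdict(dict)
for obj_id, key, value in rows:
    objs[obj_id][key] = value

print(dict(objs))  # {1: {'color': 'red', 'size': 'L'}, 2: {'color': 'blue'}}
```

as in the postgres version, objects can have different key sets, and nothing about the query changes when a new key shows up in the data.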
>E.g. looking at early 3d games, they just feel dated. One cannot help but compare them to today's 3d graphics and of course the oldies fall short by a very far margin. Comparing pixel art games to a modern AAA 3D game title is just so obviously apples to oranges that there is no struggle due to loss of fidelity.
>Again, perhaps my view is biased because I grew up with the pixelated games of old. Still, I'd like to assign the excitement of the early 3d titles more to the novelty (at the time) than from the fidelity of the graphics. I think the pixel art style will (or already does) have much more staying power.
I happen to agree that early 3d games on the PS1/N64 look rather ugly, but nostalgia is very powerful, and retro low-poly modeling is now its own aesthetic [0]. As with pixel art, artists will take liberties with what the hardware of the time was actually capable of, but people who grew up with the style will probably appreciate it more.
Healthcare in the United States is complicated with lots of different players. The article seems to hint at a key issue they faced:
>But insiders claim that it will allow the founding companies to implement ideas from the project on their own, tailoring them to the specific needs of their employees, who are mostly concentrated in different cities.
Medical care is mostly a localized service, so each city is its own individual market. As large as Amazon, JPMorgan, and Berkshire are, their employees are spread out geographically, and even in their headquarters cities they do not represent a significant number of people negotiating for medical services. Without a way to control the actual supply of medical services (i.e. doctors and hospitals), their programs will have to negotiate at market rates, meaning there really is no way to achieve significant savings.
Amazon released their PillPack service, which helps because drugs can be shipped across the country, so there are efficiencies to be gained there. However, drug costs are still a small percentage of overall costs [0], albeit a growing one.
I wonder what the costs/benefits of telemedicine are like right now. It seems like for certain services, like GP visits, you could just have a location that scans people to create high-detail models for a remote doctor to review during a live call, while locally situated support staff handle things like collecting blood and other fluids and feeling for lumps. Probably not ideal for a small practice, but if you have a pool of 100,000 employees, it seems plausible that you could realize some savings.
I priced Amazon's prescription drug service, and it estimated I would be paying my co-pay on each of the drugs. That sounds good if you usually go to CVS, but at Walmart I get my four prescriptions for $26-30.