They're also generally one and the same: at least if the stalker has money or the right friends, most kinds of law-enforcement access mean stalker access. It's not unheard of for an officer to be the stalker themselves, and so many people work in law enforcement that bribing, impersonating, or persuading your way to access is not that big a deal. Not to mention that a well-resourced stalker can just file a federal lawsuit and issue a subpoena for the records.
The only safe thing is for the records to never exist in the first place.
> It's not unheard of for an officer themselves to be the stalker
This was one of the motivations for passage of the Driver's Privacy Protection Act of 1994. Nowadays, officers need a legitimate reason to run a plate - unless the patrol car is fitted with automatic cameras[1] that look up every plate of every car they drive past.
> The Virginia state police used license plate readers to track people’s attendance at political events;
> The New York Police Department used license plate readers to keep track of who visited certain places of worship, and how often;
> Despite all this surveillance, ALPR technology has been repeatedly shown to be unreliable; like other police technologies, ALPRs can and do make mistakes.[2]
Generally, court decisions have held that you have zero expectation of privacy in public spaces. Current license plate standards[3] aim for plates that are uncluttered and easily read by the human eyeball, even when wrapped in license plate frames (which usually make the state name hard or impossible to read, the most common failure mode for ALPR[4]). If the reflective material (traditionally called "ScotchLite"[5]) is worn out (or defaced), most states require the plate to be replaced.
How is that achievable? PIs can legally do it. Random people can keep tabs on you and exchange gossip. It's the sudden scale and low cost that don't sit well with the freedom not to be tracked in public 24/7 that we took for granted.
The core ill is aggregated data, because that's what puts the "mass" in mass surveillance, data mining, etc.
The collection actions are almost immaterial. Without persistence they must be re-performed for each request, which naturally provides a throughput bottleneck and makes "for everyone" untenable.
If we agree the aggregated data at rest is the problem, then addressing it would look like this:
1. Classify all data holders at scale into a regulated group
2. Apply initial regulations
- To respond to queries for copies of personal data held
- To update data or be liable in court for failing to do so
- To validate that counterparties apply basic security due diligence before transferring data to them (or the transferor also faces liability)
- To maintain a *full* chain of custody of data (from originator through every intermediate party to holder) so that leaks / misuse can be traced
- To file a yearly public report with the federal government on the types and amounts of data held, and the counterparties it was transferred to
The initial impediment to regulatory action is Google, Meta, Equifax, etc. saying "This problem is too complex and you don't understand it."
It's not. But the first step is classifying and documenting the problem.
It is not realistic to say that no person is allowed to keep track of another person: watch where they go, when, with who, etc.
It should not be acceptable for a company to gather information on "everyone"; where they have been going, when, with who, how often, etc. And it should not be acceptable for them to sell that information (to government agencies OR private citizens).
It's a matter of scale.
- Making the first one illegal/impossible would be difficult/costly; and not doing so has a limited impact (to society, not to the single person affected).
- Making the second one illegal is much easier, and it's much easier to shut down a large company doing it than it is 1,000 individual stalkers. The impact of making it illegal is much wider and better for society as a whole.
We don't want anyone being stalked. But in a cost/benefit analysis, we can do something about one of them but not the other.
The only way is through - everybody should get into the practice of stalking and gossiping about each other in a Molochian environment, where the people who do not do so suffer from the losing side of an information asymmetry.
Expect AI, especially post-Mythos, to just enable this at even further scale. Consumer grade wireless networking gear as a whole is a very wide attack surface and is basically never updated.
If PIs can "legally" do it, then it sounds like there is a law which allows them to do it. That law can be revoked (unless the power comes from the Constitution, which would make it effectively impossible to revoke).
Note that PIs are effectively illegal under GDPR by default. They would generally need to provide Article 13 notice, i.e. you would become aware of them unless they were just asking around without actually following you. Member states can make them legal though (via Article 23) and likely in many cases they have done so.
In the US, PI licensing is only about PIing for hire. The actual act of going through public records, following cars and whatnot do not require a license, you can spy on anyone without a license as long as you don't get paid for it.
The EU is more complicated, but Article 14(5)(b) allows withholding notice if it would impair or defeat the purpose of processing. The PI must, however, apply "safeguards", whatever that may mean.
Article 14(5)(b) does, but that only applies to Article 14 notice (personal data not obtained directly from the data subject). Article 13 (personal data obtained directly from the data subject) has no such exception in the GDPR itself.
This becomes extremely relevant when you read it in light of the C-422/24 decision. In that case, personal data collected via body-worn cameras were determined to be "directly obtained". Paragraph 41 of the judgment:
> If it were accepted that Article 14 of the GDPR applies where personal data are collected by means of a body camera, the data subject would not receive any information at the time of collection, even though he or she is the source of those data, which would allow the controller not to provide information to that data subject immediately. Therefore, such an interpretation would carry the risk of the collection of personal data escaping the knowledge of the data subject and giving rise to hidden surveillance practices. Such a consequence would be incompatible with the objective, referred to in the preceding paragraph, of ensuring a high level of protection of the fundamental rights and freedoms of natural persons.
Given this, it's very unlikely that a PI observing someone (especially if they record) could be considered an Article 14 rather than Article 13 type of collection, as it's exactly the "hidden surveillance practice" the Court warned about.
Member states do have a right to restrict the Article 13 disclosure obligations via an Article 23 restriction, but that requires a specific law in the member state, and the law itself must fulfill the obligations that Article 23 imposes. Article 23(2) essentially forbids leaving everything up to the controller.
And as far as PI work in the US goes, the actions involved in stalking and in PIing "for yourself" tend to be so similar that I wouldn't recommend anyone try it.
I unsubscribe, and immediately set up a filter to mark any email from their (sub)domain as spam. Too many sites keep spamming for a week or two after unsubscribing, that behavior deserves a reputation drop.
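That "(sub)domain" detail matters: senders often rotate subdomains (`mail.example.com`, `news.example.com`), so matching only the exact sender domain misses them. A minimal sketch of the matching rule such a filter needs (the domain names are just illustrative):

```python
def is_blocked(sender: str, blocked_domains: set[str]) -> bool:
    """Return True if the sender's domain, or any parent domain of it,
    is on the blocklist -- so subdomains are caught too."""
    domain = sender.rsplit("@", 1)[-1].lower()
    parts = domain.split(".")
    # Check "mail.example.com", then "example.com", then "com".
    return any(".".join(parts[i:]) in blocked_domains for i in range(len(parts)))
```

Blocking `example.com` then catches `news@mail.example.com` as well as `news@example.com`.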
I don't mind if a company sends me emails if I gave them my email address. As long as, when I click "unsubscribe" to the email, they stop. I don't want to have to go log back into their system and unsubscribe. I just want to click the unsubscribe button and have it be done - forever, not just until they add a new category for email.
I have a fair number of companies that send me emails (because I signed up for their service) on a "slow" basis (i.e., when they have something interesting, not just "every week, so you don't forget us"). I don't mind those. Sometimes I read them, sometimes I don't. I don't unsubscribe and I don't mark them as spam.
I'm not saying you should be the same as me. I _am_ saying that, just because _you_ don't like it, doesn't make them "clearly in the wrong". Because there are people that feel like the way they are acting is reasonable.
FYI, requiring logging in to unsubscribe is a violation of the CAN-SPAM Act in the U.S. I just mark those as spam if they don't allow one-click unsubscribes.
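For what it's worth, there's also a standardized mechanism for this: RFC 8058 defines headers that let the mail client fire a one-click unsubscribe POST without any login, and major providers check for them. Roughly (addresses here are illustrative):

```
List-Unsubscribe: <mailto:unsub@example.com>, <https://example.com/unsub?id=123>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```

Senders that omit these, or route "unsubscribe" through an account login, are exactly the ones that end up in the spam folder.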
> There are unsubscribe buttons with laws that enforce that they work.
They don't. Period. Full Stop. There are tons of companies that I have told to stop sending me emails that just... continue to do so. And some that won't _allow_ me to tell them to stop (I need to create an account to tell them not to email me... but they shouldn't be emailing me if I don't have an account).
But that same exact logic applies to "it's really hard to succeed, so I'm going to just mug some people to get the money I need". I'm sorry, but "it's hard to succeed, so I'm justified in being unethical" is _not_ a valid excuse.
> They send "transactional" emails every month that can't be opted out of when they notice changes in my credit file
And you can't even try to unsubscribe without creating an account. And, if I don't _have_ an account, it is (pretty much by definition) NOT transactional.
Yes. 100%. And the fact that you're not seeing why it does is confounding to me.
This person has shown that they are willing to harm society (for their own benefit, presumably); by active choice. And, as such, anything they say needs to be viewed through the lens of "is this person lying for their own benefit".
1. Their previous actions do mean that we should not trust what they are saying outright; we should do (more) work verifying the information they provide.
2. Their previous actions do _not_ mean we should avoid holding others accountable when the information provided turns out to be true.
You're asking your question like someone is arguing that this person's information doesn't matter (2); but the point being made is that we should (1).
The fact that someone actively worked against the welfare of society as a whole, in significant and impactful ways, _is_ a criticism of their credibility. It speaks to their morals and empathy for others.
It doesn't mean that what they're saying is a lie, but it puts them firmly in the bucket where what they say needs to be verified.
The message is that they're bad and the fact that they did these bad things proves they're bad.
And the key thing here is that we need to decide if we believe "they did these bad things". If the person reporting them is well known as someone who is truthful and trustworthy, we're likely to believe them with little proof. If the person reporting them is well known as a bad person who harms others for their own benefit... we're less likely to believe them until we can verify the truth of their statements.
You're completely skipping over the "is this person telling the truth" part; I assume because they're saying things that fit in with your pre-existing view of the world. And that's not a good thing.
> But once you’re dealing with multiple users (tens or hundreds) it’s a different problem. How confident are you writing auth and password reset flows? How sure are you that the AI got it right? How solid is your approach to roles and permissions? Are you implementing 2FA? Supporting drafts, scheduled publishing, editorial workflows? Now you are also tech support writing the infrastructure as issues come in.
And that's only the start of where it gets complicated:
- Ingesting data from 3rd party systems
- Translating content to other languages
- Front end user auth and preferences
- Personalized content
- A/B testing
- Multiple sites in the same CMS, sharing the same content
The list of things that can make a CMS (and the sites it is used to create) more complicated is enormous.
I built a production forum from scratch with thousands of real users.
For years I thought of doing it. Can’t be that hard. You can imagine how every component would work. You just need a few tables, right?
But it turns out a polished forum that people want to spend time on has infinite polish. Every feature explodes into a fractal of micro polish. You could spend your whole life improving it and handling rough edges and making it nicer to use.
The WYSIWYG editor is a good example. You could work on just that full-time and never run out of things to do. Or the daylight between an MVP notification system and a mature one that sends PM/email notifications, tracks high-water marks, and lets users mute certain threads: infinite polish.
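To make the "high-water mark" idea concrete, here's a minimal sketch of the core bookkeeping (in-memory and with invented names; a real forum would back this with tables keyed the same way):

```python
from dataclasses import dataclass, field

@dataclass
class NotificationTracker:
    # thread_id -> highest post id in that thread
    latest: dict[int, int] = field(default_factory=dict)
    # (user_id, thread_id) -> high-water mark: last post id the user has seen
    seen: dict[tuple[int, int], int] = field(default_factory=dict)
    # (user_id, thread_id) pairs the user has muted
    muted: set[tuple[int, int]] = field(default_factory=set)

    def post(self, thread_id: int, post_id: int) -> None:
        """Record a new post in a thread."""
        self.latest[thread_id] = max(self.latest.get(thread_id, 0), post_id)

    def mark_read(self, user_id: int, thread_id: int) -> None:
        """Advance the user's high-water mark to the newest post."""
        self.seen[(user_id, thread_id)] = self.latest.get(thread_id, 0)

    def should_notify(self, user_id: int, thread_id: int) -> bool:
        """Notify only if the thread isn't muted and has posts past the mark."""
        if (user_id, thread_id) in self.muted:
            return False
        return self.latest.get(thread_id, 0) > self.seen.get((user_id, thread_id), 0)
```

Even this toy version hints at the fractal: per-channel delivery, batching, digest emails, and read-state sync across devices all layer on top of the same high-water mark.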
I also thought about this but decided to go with Simple Machines Forum and I'm glad I did. Just looking at the dearth of options in the admin area is enough to make my head spin.
That being said, I probably will embark on a custom forum just because I'm highly opinionated and capable.
I suppose building it from scratch means you could release the source and then charge for customizations or for push demands. Would you even consider doing that?
Isn’t this true for any project ever attempted? The only reason this project exists is because millions have already been wasted on trying to do this in-house.
Yes, but the comment I was replying to sounded like it was saying that the large cost wasn't an issue, because it would have such a big impact. But the odds of it actually accomplishing anything useful need to be taken into account, too. If it has a low chance of success, then a large price tag isn't worth it.