“March 24, 2017, was no ordinary day for Proxima Cen,” said Meredith MacGregor [...]
Nitpicking, but the not so ordinary day for Proxima Centauri was two or three days before New Year's Eve 2012 due to its distance. Maybe just some fireworks set off prematurely.
Practically anyone doing real astronomy will use a well-defined, continuous Earth-centric (or solar-system-barycentric) timescale for a variety of reasons, including reproducibility within the solar system (after all, astronomers on Earth only have other Earth-bound/near-Earth astronomers to communicate with at present) and the fact that the uncertainties in the (spacetime) positions of astronomical objects dwarf the precision of local timescales (e.g. TT or TAI). Describing observables at one timestamp (a form of gauge fixing) is convenient, but pushes the uncertainty into the spatial slice thus fixed, and a suitable choice of coordinates can push most of that uncertainty into one of the three spacelike axes.
Since there is a gauge freedom -- meaning you can choose to slice up the spacetime position arbitrarily -- you would be perfectly free to use coordinates fixed on the observed object as you are doing, but that's inconvenient for discussing the relationship between a distant object and a closer one, and because uncertainties in the spacetime positions of astronomical objects are commonplace, such coordinates demand revision whenever new data provide a better estimate of position.
Concretely, you say "two or three days", capturing the uncertainty in distance to Prox. Cen. That's rather worse than the sub-second errors in the ALMA timestamp data (e.g. JD 2457836.8349, one of the timestamps in the preprint). Moreover, further observation of the alpha Cen. system will almost inevitably reduce uncertainties in position of Prox. Cen., which raises the question of whether one should go back and revise any timestamps like your "two or three days before New Year's Eve 2012" as better data enable better retrodiction of its position relative to Earth.
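To make that concrete, here's a minimal sketch of the arithmetic (Python with astropy). The JD is the ALMA timestamp quoted above; the distance and its uncertainty are illustrative round numbers, not the best published measurements:

    import astropy.units as u
    from astropy.time import Time

    # One of the ALMA timestamps from the preprint, as a UTC Julian Date.
    t_observed = Time(2457836.8349, format='jd', scale='utc')
    print(t_observed.iso)                     # ~2017-03-24 08:02 UTC

    # Illustrative numbers only: an assumed distance to Proxima Cen and a
    # hypothetical uncertainty on it.
    distance = 4.25 * u.lyr
    distance_err = 0.005 * u.lyr

    # One light-year of distance corresponds to one Julian year of light travel,
    # so the inferred emission epoch and its uncertainty follow directly.
    t_emitted = t_observed - (distance / u.lyr) * u.yr
    print(t_emitted.iso)                      # a date in late December 2012

    print((distance_err / u.lyr) * 365.25)    # emission-epoch uncertainty in days, ~1.8

Any revision of the assumed distance shifts t_emitted, which is exactly the revision problem raised above.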
Fixing a Gregorian-like date on a set of spacetime coordinates originating on an event at a celestial object is a false de-parochialization; that is, it's no more general. If there are astronomers on Proxima b, it would be a stunning coincidence if their start-of-year aligned with our start-of-year on that orbit (and we don't know enough about the rotation of Proxima b to talk about their local length-of-day). Astronomers elsewhere in the galaxy (and in particular in our future) are unlikely to be helped by swapping observations-at-JD-xxxyyy for a best guess of local-to-event observables inferred by retrodiction from those same Earth-bound observations.
We don't know exactly how far away the stars are, so when we convert Earth time to star-local time, we lose accuracy. Better to keep times stored local to Earth, and let distances get more accurate as we learn more.
There isn't really any more precise timekeeping equipment available to us than atomic clocks here on (and near) Earth, so using them is expedient.
In principle we could measure the spectra of the relic fields, but with current technology we can't do that for the cosmic neutrino background, and while the cosmic microwave background (CMB) is an attractive option (we have lots of instruments measuring it in excruciatingly fine detail) its utility as a quasi-universal clock is based on picking out an ideal observer who sees no non-cosmological redshift (i.e., no contributions from proper motion or local gravitation), and such an observer isn't really physical. Converting local measurements of the CMB to an ideal observer's in a way which achieves precision approaching that of our terrestrial atomic timescales depends on understanding our own local (~ Mpc scale) environment better than we do today.
(Probably we will get there by using CMB observations to steer local oscillators, much as we do in atomic clocks, hydrogen masers, and so forth, and then use those steered oscillators to predict the CMB spectrum on the (well-supported) assumption that it is that of an ideal blackbody at a certain temperature.)
However, we'll still record actual observations at local CMB temperature (perhaps recording what the "universal" CMB temperature would be if we were a "universal" observer). We are still stuck with the problem of position of the observed object, since we cannot directly observe the CMB temperature at an event like this flare.
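For a rough sense of scale, here's a toy estimate (round illustrative numbers, not from the article) of how an unmodelled peculiar velocity degrades the "CMB temperature as a clock" idea. The CMB temperature scales as 1/a, so a fractional temperature error ΔT/T corresponds to a cosmic-time error of roughly (ΔT/T)/H:

    # Toy estimate: unmodelled peculiar velocity vs. "CMB temperature as a clock".
    # Round illustrative numbers; not a real analysis.
    C_KM_S = 299_792.458      # speed of light, km/s
    H0_PER_YR = 7.0e-11       # Hubble rate (~70 km/s/Mpc) expressed per year, approx.

    v_pec = 370.0             # assumed unmodelled peculiar velocity, km/s (dipole-sized)
    dT_over_T = v_pec / C_KM_S    # dipole-induced fractional temperature error, ~1.2e-3

    # T ~ 1/a, so |dT/T| ~ H*dt for small intervals; invert for the time error.
    dt_years = dT_over_T / H0_PER_YR
    print(f"~{dt_years:.1e} years of 'cosmic clock' error")   # roughly 2e7 years

So even a dipole-sized velocity error swamps anything an atomic timescale worries about, which is why the local-environment problem matters so much here.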
In other words, the coordination (or if you like, localization) problem here isn't solved by switching to a different set of coordinates. Indeed, a different set of coordinates might make things worse (which was the thrust of what you replied to, and essentially what you write yourself).
Thanks, it's just that the self-described "nitpick" is one which I've seen here and there several times, and I was finally motivated to say something about it.
Describing an observed event seen at observer's time X and then recording it as happening at observer's time X minus several corrections (the dominant one being light-travel time) is at best complicated and likely loses accuracy as well.
Making something needlessly less simple and less accurate is kinda the opposite of clever, but for various reasons relatively clever people keep doing exactly that.
(You can get very far down the rathole of how to do precise timings of events generally, and astronomers do just that with e.g. barycentric Julian dates. One might ask a computer-engineering question like: suppose one is watching timestamped logs. What does one want to see exactly? The time the logger commits the log message? The time the event generating the log entry occurred (which may require adjusting for network conditions etc.)? Reversing that, does one want a clock display program to adjust for conditions between the program and the retina or even primary visual cortex of the user? And what if there are multiple users looking at such a clock?) Time is hard. Let the UTC/TAI/TT/IERS people sort it out, and use their results.
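For anyone curious what the barycentric-Julian-date machinery looks like in practice, here's a minimal sketch using astropy; the timestamp, site coordinates, and target position are approximate stand-ins, not values taken from the paper:

    import astropy.units as u
    from astropy.coordinates import EarthLocation, SkyCoord
    from astropy.time import Time

    # Approximate ALMA site and Proxima Cen position -- stand-ins for illustration.
    alma = EarthLocation.from_geodetic(lon=-67.755 * u.deg, lat=-23.023 * u.deg,
                                       height=5050 * u.m)
    proxima = SkyCoord('14h29m43s', '-62d40m46s')

    # A UTC timestamp recorded at the observatory.
    t = Time(2457836.8349, format='jd', scale='utc', location=alma)

    # Light-travel-time correction to the solar-system barycentre for this target,
    # then express the corrected time on the TDB scale: BJD_TDB.
    ltt = t.light_travel_time(proxima, kind='barycentric')
    print((t.tdb + ltt).jd)

The point being: all of this is corrections applied on the Earth/solar-system side, using the timescales those people maintain, rather than trying to restate the event in the target's own frame.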
We don't. At the end of the last ice age there was a strange period we call the Younger Dryas, during which we had a major extinction event and drastic temperature swings. There are several theories about the cause, including a massive solar event. I did a quick search and found the article below about it:
You know that crazy prepper neighbor who spent last fall putting a shelter in his backyard? Yeah, he'll be fine.
For the old folks like me - there's roughly a 1:100 conversion between sieverts and rem. So an anticipated 3-6 sieverts over 2 days is 300-600 rem, which is fatal to nearly everyone.
But the radiation mostly comes in the form of electrons, protons, or nuclei of heavier elements, not neutrons or gamma/X-rays. A really thin sheet of metal would reliably stop that.
I think the effect of the direct radiation would indeed be blocked by the typical roof and walls. The part where the prepper has an advantage is dealing with the indirect effects -- such a bombardment would (temporarily) collapse the geomagnetic field and fill the uppermost reaches of the atmosphere with dust. That would cause massive effects on the climate.
Famine would be the other problem the prepper would have an advantage in. One would expect the deaths of many farm animals, and it would take a few years to recover production.
In my opinion, it would be difficult for the US to enter a widespread famine (5% of the population starving to death).
The amount of dried, preserved, high-calorie, long-shelf-life food that we have is staggering. Crackers, soups, noodles, pasta, dehydrated soy/meat, etc.
The only two reasons someone could starve to death are if companies deliberately hid the supply of food for fun, or if they refused to eat those types of food for about 60 days, after which most people die from acute starvation.
Death of farm animals would mean that the total available food supply goes up, not down. Depending on exactly how you count, it takes between 2 and 20 pounds of corn to produce one pound of beef.
In terms of total global food production, meat production is a net drain that basically consumes food to produce a luxury.
The passing of exoplanets (planets outside our solar system) in front of the host star is not the only way to detect planets. The most common method used to be the radial velocity method.
Essentially, planets and stars orbit a common center of mass (which often lies within the star's radius). This effect essentially makes the parent star wobble. Because we know the constituents of the star and its spectrum, this wobble shows up as a periodic red- and blueshift (the Doppler effect). From the magnitude of this effect you can determine a lot about the planet's orbit, but not everything. For example: you can determine the minimum mass of the planet but not its actual mass.
I'm not sure if they have used the radial velocity method on this star, but I assume they would have.
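They did; Proxima b was in fact discovered by radial velocity. For a rough sense of the signal involved, here's a back-of-envelope sketch with round numbers close to the published parameters (circular orbit assumed; all values approximate):

    import math

    # Back-of-envelope radial-velocity semi-amplitude for a Proxima b-like planet.
    G = 6.674e-11               # m^3 kg^-1 s^-2
    M_SUN = 1.989e30            # kg
    M_EARTH = 5.972e24          # kg
    DAY = 86400.0               # s

    P = 11.2 * DAY              # orbital period, ~11.2 days
    m_p_sini = 1.3 * M_EARTH    # minimum planet mass (what RV actually constrains)
    M_star = 0.12 * M_SUN       # rough Proxima Cen mass

    # K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (M_star + m_p)^(2/3), with e = 0
    K = (2 * math.pi * G / P) ** (1 / 3) * m_p_sini / (M_star + m_p_sini) ** (2 / 3)
    print(f'K ~ {K:.2f} m/s')   # about 1.5 m/s -- a tiny wobble to measure

A wobble of order 1 m/s is close to the precision limit of the best spectrographs, which gives a sense of why these detections are hard.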
You can measure the position of the star in the sky and use that to detect systems that are "face on" to us (with the orbital axis pointing at us), but it's harder.
Sounds like 'harder' is an understatement - according to that article no planets have been discovered with this technique, and it is unlikely to yield any discoveries any time soon. (The technique is to directly observe the star wobbling in space rather than measuring Doppler shifts.)
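To put a rough number on 'harder': the angular wobble you'd be trying to see is tiny. A back-of-envelope sketch with approximate published values for Proxima b (nothing here is from the linked article):

    # Rough size of the astrometric wobble Proxima b induces on its host star.
    # Approximate values; purely illustrative.
    m_planet = 1.3              # minimum planet mass, Earth masses
    m_star = 0.12 * 333_000     # stellar mass in Earth masses (~0.12 solar masses)
    a_au = 0.049                # planet's orbital semi-major axis, AU
    d_pc = 1.30                 # distance to Proxima Cen, parsecs

    # The star orbits the barycentre on an orbit shrunk by the mass ratio;
    # a[AU] / d[pc] converts that to an angle in arcseconds.
    wobble_arcsec = (m_planet / m_star) * a_au / d_pc
    print(f'~{wobble_arcsec * 1e6:.1f} micro-arcseconds')   # roughly 1 micro-arcsecond

That's well below the few-tens-of-microarcsecond precision of current astrometry, so no detections via this route is unsurprising.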
Planetary systems are rarer around binary systems, and the planets are smaller too (making them harder to detect). Proxima Centauri is the smallest star in a triple system.
In the abstract, it says that they conclude this from the absence of "quiescent excess emission". I do not know exactly what this means and could not find an explanation for it, but maybe it is a technical term for "light echo" or something related to it. If a star suddenly becomes 1000x brighter, one would expect that, some time after the star has returned to normal brightness, planets and dust would also light up.
https://en.wikipedia.org/wiki/Light_echo
The web page links to a journal article that you have to pay for, but you can find the freely-readable version of the full article at https://arxiv.org/abs/1802.08257 . Section 4.2 discusses dust emission. The original paper they're correcting is at https://arxiv.org/abs/1711.00578 (section 3.2). The newer paper concludes that the previous one rested on wrong assumptions because a flare happened during the measurement, so there is no need to hypothesize dust belts (at least the two inner ones; they didn't have info about the far outer belt) to account for the supposed "excess emission".
That would depend on Prox Cen b's atmosphere and magnetic field, and we don't know much about either.
At the top of its atmosphere (if it has one at all; otherwise, at the surface) Prox Cen b would get about 500x more EUV/X-ray radiation in an average day than we get at the top of Earth's atmosphere, so putting Earth into an orbit whose periastron is similar to Prox Cen b's average orbital distance would not be good news for many of Earth's near-surface organisms.
The planet is in such a tight orbit around Prox Cen that it may not have any atmosphere, thanks to erosion by the stellar wind. However, it might have a sufficiently strong magnetic field that erosive loss is checked. It also might not be a rocky planet at all, but rather something like a small Neptune with a deep atmosphere. Finally, Venus's magnetic field is negligible, and Venus is rocky, yet it has a thick atmosphere; Prox Cen b could be like a big Venus, for all we know so far.