Paul Derouda wrote: Bob, about the longitude problem: The reason they were trying to chart the movements of the moon or the satellites of Jupiter was precisely that they didn't have any reliable watch set to standard time, as any watch before John Harrison's chronometer was next to useless at sea for this purpose. They thought they could observe the moon or Jupiter's satellites, compare their observations to pre-made charts, and establish standard time like that, and this standard time compared to the local solar time would permit them to establish longitude. Just imagine how difficult it would have been, peeking at Jupiter's moons and trying to make out their relative positions on some shaky little ship!
Well, sea-borne observations are very difficult, yes. As for the rest, yes, I think that's what I said, or tried to at any rate. Regardless, the system still required sea-borne observations but now, for the first time, they had a reliable base-line to compare them with. So I am in full agreement with you on this. Absolutely.
Different issue, eclipses, nothing to do with the above. Might as well get this all done in one post.
As for the eclipse of -0423 (= 424 BCE): Here's how it works. Astronomy uses a magnitude scale derived from work originally performed by an ancient Greek, Hipparkhos. Hipparkhos started with the brightest stars in the sky, which he called "first magnitude", and then went on down through "second", "third", etc., until he got to "sixth", which was as faint as he could see. OK. So we have a difference of five magnitudes between the bright ones and the faint ones. Well enough.
There matters stood until the 19th century, when technology allowed astronomers to actually measure, as opposed to estimate, stellar brightnesses. Taking representative first and sixth magnitude stars, they discovered the sixth magnitude stars are actually about 100 times fainter than first magnitude stars. In other words, the human visual system (and presumably this is true for other animals as well) is skewed to prevent the bright objects from swamping the faint ones: faint objects appear much closer in brightness to bright ones than they really are. Further investigation revealed that representative stars of any given magnitude were about 2.5 times brighter or fainter than stars of the neighboring magnitudes. So the astronomers institutionalized it. They defined a difference of one magnitude as a difference of the fifth root of 100 in brightness (approx. 2.5).
A rough idea of how this works, based on a first magnitude star, is:
first = 1 brightness
second = 1/2.5 brightness
third = 1/6 brightness
fourth = 1/16 brightness
fifth = 1/40 brightness
sixth = 1/100 brightness
For reference, all the stars in the Big Dipper are second magnitude stars except the one where the handle joins the bowl. That guy is a third magnitude star. Polaris is also a second magnitude star.
So, to get the actual difference in brightness between two objects, you use the difference in magnitudes between them as an exponent: the ratio is 2.5 ^ difference (where "^" means "raised to the power of", and 2.5 here stands for the exact factor, the fifth root of 100, about 2.512). A difference of five magnitudes is then that factor to the fifth power, which is exactly 100.
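If it helps to see it spelled out, here is a quick sketch in ordinary Python (my own illustration, nothing from the original beyond the numbers themselves) that reproduces the rough table above using that exact factor:

# One magnitude step = the fifth root of 100, about 2.512 times in brightness.
FACTOR = 100 ** (1 / 5)

def brightness_ratio(mag_difference):
    # How many times brighter the brighter of the two objects is.
    return FACTOR ** mag_difference

# Reproduce the rough table above, taking a first magnitude star as 1.
for mag in range(1, 7):
    print(f"magnitude {mag}: 1/{brightness_ratio(mag - 1):.1f} brightness")

# A difference of five magnitudes is exactly a factor of 100.
print(round(brightness_ratio(5)))  # 100

Run it and you get 1/1.0, 1/2.5, 1/6.3, 1/15.8, 1/39.8, 1/100.0, which is where the rounded figures in the table come from.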
Once the measured magnitude scale was defined, astronomers could talk about an extended range of magnitudes - and they did. Let's take some everyday examples. On this extended magnitude scale they placed the Sun and the Moon. The Sun is very nearly at -27 magnitude (very, very bright as the negative sign indicates). The Full Moon is about -13 (very, very bright). Those numbers look very close, don't they? 27/13 is just a bit more than 2. So the Sun isn't all that much brighter than the Full Moon, right?
Wrong.
It's 2.5 to the 14th power times brighter than the Full Moon (the difference between -27 and -13 is 14 magnitudes). I'll do the math for you. That's about 400,000 times brighter. I kid you not. It is extremely bright.
Now a 70% obscured Sun, in the eclipse you're talking about, shines at only about 30% of its normal brightness. In other words, it's only about 1/3 as bright as before. How does this compare to a Full Moon? You can do this one in your head. 1/3 of 400,000 is about 130,000. A 70% obscured Sun is about 130,000 times brighter than a Full Moon! Next time you see a Full Moon, imagine something about 130,000 times brighter and you have the Sun as it was visible from Athens during the eclipse of -0423. Now tell me how noticeable the diminution of sunlight would be to you.
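Same arithmetic in the same little sketch, for anyone who wants to check it (the -27 and -13 are round figures, so the results are ballpark only):

# Sun vs. Full Moon, then the roughly 70% obscured Sun of the -0423 eclipse.
FACTOR = 100 ** (1 / 5)              # about 2.512 per magnitude

sun_mag, moon_mag = -27, -13         # rough apparent magnitudes
difference = moon_mag - sun_mag      # 14 magnitudes
sun_vs_moon = FACTOR ** difference
print(round(sun_vs_moon))            # roughly 400,000

# With roughly 70% of the light blocked, about 1/3 gets through.
eclipsed_vs_moon = sun_vs_moon / 3
print(round(eclipsed_vs_moon))       # roughly 130,000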
That's also an example of how skewed the human visual system really is. And it's an evolutionary plus, an advantage to the user. That's why things in nature happen that way.
The annular eclipse of 1994 that passed over central Illinois was, in Madison, more than a .9 magnitude eclipse (here's where I don't like the way astronomers use the term - it means something different here: roughly, how much of the Sun's disc is covered). More than .9, not the .7 of the eclipse of -0423 in Athens. More than .9. And it didn't even stop traffic in Wisconsin, except for those who actually knew there was an eclipse taking place.
Madison is a lot closer to the center line of the 1994 eclipse than Athens was to the center line of the -0423 eclipse. A lot closer. Moral: you have to be very close to the center line, not hundreds of miles away. In 1954 a total eclipse passed just north of Denmark (about the same as the -0423 eclipse) and then passed over Lithuania. This eclipse was observed in Austria - only by those who knew it was taking place. I know. I was there. Austria was a lot closer to the center line of this eclipse than Athens was to the eclipse of -0423. We had a .79 (not a .72, a .79) eclipse. Spectacular to people who write papers, yet nothing out of the ordinary was visible to the casual observer. Most of the people on Textkit have probably been through a partial eclipse like that at some point and never noticed it, unless someone told them it was happening. I would dearly love for that eclipse of -0423 to be the one Thoukydides was talking about, but I just can't vouch for it.
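To put rough numbers on that, here is the same sketch once more, treating those eclipse figures as the fraction of sunlight lost (which is loose, but it's how the figures are being used here):

# How much brighter than a Full Moon the leftover Sun still is,
# for the partial eclipses mentioned above.
FACTOR = 100 ** (1 / 5)
sun_vs_moon = FACTOR ** 14           # about 400,000, from the earlier sketch

for covered in (0.72, 0.79, 0.90):
    leftover = (1 - covered) * sun_vs_moon
    print(f"{covered:.0%} covered: still about {leftover:,.0f} times a Full Moon")

Even the .9 eclipse leaves something on the order of 40,000 Full Moons' worth of light in the sky, which is why nobody looks up.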