Wendy Grossman has a nice essay, The art of the impossible (which I found because she quotes me), about how to tell plausible-sounding fantasy from wild but genuine possibility.
I think that is a very interesting and tricky problem. We need to do it every day (especially those of us moving in future-oriented circles), but doing it well is hard.
I think the key is to have a sequence of attention filters. Just as with spam, most bad ideas can be detected simply by looking at keywords or form. Somebody claiming a fundamental insight in physics who does not use a single equation, or who invents numerous new terms, is very unlikely to be worth taking seriously. See the sci.physics crackpot index for a not entirely unworkable formalisation. In the Q&A part of a lecture you can almost always tell who is going to ask the flaky questions (or rather, deliver their own rambling mini-lecture disguised as a "question") the moment they open their mouths: tone of voice, vocabulary and style signal what is going to follow. On the upside, this kind of filtering is fast and takes minimal effort. On the downside, if you get the wrong patterns into your mental spam filter you will automatically dismiss categories of messages that are actually worth a bit more attention.
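To make the spam-filter analogy concrete, here is a minimal toy sketch in Python of a surface-feature scorer in the spirit of the crackpot index. The specific patterns and weights are invented for illustration, not taken from the actual index or any real filter; the point is exactly the trade-off above: fast and cheap, but miscalibrated patterns will silently filter out things worth reading.

```python
# Toy illustration of the "mental spam filter": score a claim on cheap
# surface features before spending any real effort on it. The keywords and
# weights below are made up for illustration only.
import re

RED_FLAGS = {
    r"\beinstein was wrong\b": 5,
    r"\bsuppressed by the establishment\b": 5,
    r"\bparadigm[- ]shattering\b": 3,
    r"\bquantum\b.*\bconsciousness\b": 3,
}
GREEN_FLAGS = {
    r"\bwe measured\b": -2,
    r"\bpreprint\b": -1,
    r"=": -1,          # at least one equation somewhere
}

def surface_score(text: str) -> int:
    """Cheap first-pass filter: a high score suggests the claim is probably
    not worth deeper scrutiny. False positives are the price of speed."""
    text = text.lower()
    score = 0
    for pattern, weight in {**RED_FLAGS, **GREEN_FLAGS}.items():
        if re.search(pattern, text):
            score += weight
    return score

print(surface_score("Einstein was wrong, and my paradigm-shattering theory "
                    "is suppressed by the establishment."))   # high score
print(surface_score("We measured the decay rate; see the preprint, eq. (2): "
                    "N(t) = N0 * exp(-t/tau)."))              # low score
```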
There are some methods of deeper filtering that are popular but unreliable. Checking whether the source is a respected person might help, but plenty of people say stupid things not just outside their specialities (where they are merely smart laymen) but even within them; just consider the large number of things ascribed to Einstein. Looking for biasing interests might turn up reasons to suspect funding bias, self-interest or a tendency to self-deception, but could just as easily be an expression of our own biases. We can also check whether a statement is likely to be a self-propagating meme spreading because of its coolness, emotional impact or other "dramatic" properties: if it is, we have reason to think it is not necessarily truth-tracking. If it spreads because it fits the evidence, it has a better chance of being right (though availability bias can still make a false idea spread effectively under the right conditions).
The deeper filtering suggested by Merkle involves checking a claim against known science (if it breaks elementary, well-established science it is very likely wrong) and checking it for internal consistency. This takes a lot more effort.
H.G. Frankfurt made the good point that people who are talking bullshit are uninterested in truth: liars and truthful people both pay attention to it, but bullshitters care more about the form and appearance of what is said than about any real content. So detecting when somebody is bullshitting (it is always a process, something a person does, rather than a property of an isolated claim) is a good way of checking the value of their ideas. Calling bullshit sometimes works: ask the source for details, check whether they actually understand what they are saying and whether they can elaborate the ideas. But delusional and lying people are pretty good at answering challenges. Calling bullshit can also be a social strategy for appearing critical or clever rather than a way of elucidating the value of ideas.
Wendy arrives at the conclusion that the real thing to look for is deliverables. That may be useful even when the ultimate deliverables (AI, nanoassemblers, life extension) remain in the future: most real work produces deliverables of other kinds before it achieves success. There is a difference between the inventor slaving away in his workshop and the one talking on the internet about the importance of his work. A nanoassembler project will produce papers and prototypes that give evidence of progress (these should, of course, still be scrutinized for actual content). The SENS program is sponsoring real research in particular areas that follow logically from its aims. Meanwhile most purveyors of herbal remedies seem remarkably uninterested in rigorously testing their products.
Overall, the effort we put into checking whether something is worth taking seriously should be proportional to our prior estimate that it is actually true, to how important it would be if true, and to how much effort we could save by investigating it.
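Stated as a rough rule of thumb, in my own notation (this is just one possible reading of the sentence above, not a formula from Wendy's essay):

```latex
% E = effort worth spending on scrutiny
% p = prior probability that the claim is true
% V = importance (value) if it turns out to be true
% S = effort saved by settling the question now
E \;\propto\; p \cdot V \cdot S
```

Treating the three factors as multiplicative is an assumption on my part; the point is only that effort should scale with each of them.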