
Venturing to Utilize Generative AI for Precisely Forecasting Your Demise Date

The use of AI-driven death date predictors is on the rise, supposedly pinpointing the precise moment of one's demise. The trend raises various concerns, particularly around the use of generative AI.

Automated Oracle with a Mechanical Hand Foretelling Coded Prophecies


In today's column, I examine a popular trend that may appear grim but carries significant implications worth pondering with a cautious, reflective mindset. The trend centers on the emergence of so-called "Death Clock" apps, which leverage advanced AI to forecast the date and time of one's demise.

These AI-driven applications attract users for various reasons: some engage out of curiosity or playful skepticism, while others treat the AI-predicted timeline as a guide for shaping the rest of their life's journey.

Let's delve deeper.

This exploration of an innovative proposition is part of my ongoing coverage in this publication of the latest in AI, including identifying and explaining the intricate ramifications of various AI complexities (refer to this link here). For a more comprehensive examination of how AI is being used in specific medical and healthcare applications, such as mental health counseling, holistic psychological therapy, and cognitive enhancement treatments, consult this link here.

AI in the Gambit of Death Clock Predictions

Countless software applications exist that estimate likely mortality dates. The insurance industry has long relied on such programs, and actuaries who study mortality routinely use this kind of software. In that respect, employing digital tools for mortality and longevity assessments is hardly a novel concept in computing.
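
To give a flavor of how conventional (non-AI) mortality software works, here is a minimal sketch of a life-table style calculation in Python. The mortality curve below is a made-up placeholder, not real actuarial data; actual actuaries rely on official life tables and far more refined methods.

    # Minimal sketch of a life-table style calculation (illustrative only).
    # The mortality curve is a made-up placeholder, NOT real actuarial data.

    MAX_AGE = 110

    def death_prob(age: int) -> float:
        """Toy probability of dying between age x and x+1 (invented numbers)."""
        return min(1.0, 0.0005 * (1.09 ** age))

    def remaining_life_expectancy(current_age: int) -> float:
        """Expected further years of life under the toy mortality curve."""
        alive = 1.0      # probability of surviving to each future age
        expected = 0.0
        for age in range(current_age, MAX_AGE):
            q = death_prob(age)
            # Credit a full year if surviving, roughly half a year if dying.
            expected += alive * (1 - q) + alive * q * 0.5
            alive *= 1 - q
        return expected

    print(f"Toy remaining life expectancy at 30: "
          f"{remaining_life_expectancy(30):.1f} years")

Note that even this traditional approach produces an expectation over a wide distribution of outcomes, not a specific calendar date.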

However, AI-powered approaches to predicting a likely date of death have recently surfaced. Unsurprisingly, the insurance industry has been quick to fold AI-driven risk modeling into its work, and researchers are actively gauging the reliability and validity of those approaches.

Using AI in the prediction process doesn't automatically translate to improved accuracy or certainty.

A recent research paper examining the use of machine learning (ML) and deep learning (DL) as AI methods for mortality prediction, entitled "AI-Driven Risk Modeling in Life Insurance: Advanced Techniques for Mortality and Longevity Prediction" by Jegatheeswari Perumalsamy, Bhargav Kumar Konidena, and Bhavani Krothapalli, Journal of Artificial Intelligence Research and Applications, July 2023, raised several noteworthy points (selected excerpts):

  • "AI has established itself as a potent resource in modifying life insurance risk modeling."
  • "The transformative potential of AI in life insurance risk modeling hinges on its diverse toolbox of machine learning (ML) techniques. These techniques are expert at scrutinizing vast datasets and recognizing complicated connections between variables, culminating in more accurate mortality and longevity predictions."
  • "A crucial drawback of AI-driven risk modeling arises from the potential for data bias. Bias may inadvertently find its way into the data collection process or be inherent in the historical data used to educate the models."
  • "The intricate nature of deep learning models can make it difficult to comprehend the logic behind their outputs."
  • "Regulators face challenges in establishing guidelines that balance innovation with consumer protection as AI evolves."

Consumer Concerns About Utilizing These AI Apps

The latest development is that AI-driven death clock predictions have become readily accessible to everyday consumers. This, unfortunately, opens up a new set of challenges.

Consider three main apprehensions.

First and foremost, providers of such AI-based applications often advertise the predictions as flawless because AI is involved. Unsuspecting users may fall for this enticing claim. People tend to assume that if AI is involved, the generated results must be legitimate and reliable. Regrettably, that's not always the case, and consumers need to be on guard against being swindled.

Secondly, consumers who believe the AI-predicted death date may suffer mental health consequences, including excessive anxiety, fatalism, and other detrimental psychological states. Imagine someone significantly altering their life based on an alleged date and time of impending death. Reckless behavior could come to dominate their mindset. Living life to the fullest and embracing every moment might seem like an attractive alternative, particularly if death is supposedly imminent.

Lastly, devious schemes may ensue that prompt users to pay for reassessments of their death date. Vendors could tilt the AI toward unduly early death date forecasts and then pressure the user into purchasing a reassessment. Lo and behold, after payment, the AI suddenly adds years to the forecasted lifespan.

A vast patchwork of concerns exists regarding granting unchecked access to AI-empowered death clock calculators.

The Deceitfulness of Declaring Exact Dates and Times

One alluring feature is that these apps contrive to forecast not only the date but also the exact time of one's demise.

The ruse is as follows.

If only a mortality date is presented, users may assume it is merely an estimate. If an exact time is included alongside the date, whoa, the AI must be incredibly precise, and the entire system must be unerringly on target.

How else could the precise time prediction be possible?

Generally, it is smoke and mirrors that relies on statistics and broad estimations. Any seemingly exact date and time is plucked from a wide band of uncertainty, and that uncertainty is routinely disregarded or understated when the result is communicated to the user.
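
To see why the precision is illusory, consider a quick simulation sketch. Even if a model's expected remaining lifespan were trustworthy, the spread around that expectation is enormous; the mean and standard deviation below are invented for illustration.

    import random
    import statistics

    random.seed(42)

    # Invented figures: expected remaining lifespan of 57 years with a
    # standard deviation of 12 years.
    MEAN_YEARS, SD_YEARS = 57.0, 12.0

    samples = sorted(max(0.0, random.gauss(MEAN_YEARS, SD_YEARS))
                     for _ in range(100_000))

    p5 = samples[int(0.05 * len(samples))]
    p95 = samples[int(0.95 * len(samples))]

    print(f"Point estimate: {statistics.mean(samples):.1f} remaining years")
    print(f"90% of simulated outcomes span {p5:.0f} to {p95:.0f} years")
    # A 'death clock' quietly collapses this roughly 40-year spread into a
    # single exact calendar date and time.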

Users should exercise caution.

The Federal Trade Commission (FTC) has announced plans to crack down on deceptive and false claims surrounding AI usage across various sectors, as outlined in my analysis at this link. Given this, it's plausible that these death-date apps would fall under such scrutiny. When using such applications, stay vigilant so that you don't fall victim to a scam. Also be mindful of apps that may try to harvest personal information under the guise of offering a "reputable death date," potentially putting that data to illicit use.

Example Using Generative AI

Beyond custom AI apps, users can employ generative AI and large language models (LLMs) to estimate their potential mortality. While I don't encourage this route, it's essential to recognize that generative AI can be used for this purpose. Some argue that AI makers should prevent their generative AI from undertaking such tasks, while others insist that individuals should be free to do so, even at the risk of misusing the resulting information.

New AI regulations are currently a subject of heated debate. For my take on how these developments will unfold, refer to this link.

Let's explore what generative AI is capable of:

I logged into a popular generative AI platform and indicated that I wanted it to predict my death date. You could use any of the prominent LLMs, such as OpenAI ChatGPT, Anthropic Claude, Microsoft Copilot, Meta Llama, or Google Gemini, among others.
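
For those inclined to try this programmatically rather than via a chat window, here is a minimal sketch of posing the same question through an API. I'm assuming the OpenAI Python SDK purely as an example; the model name is a stand-in, the reply will vary, and none of this makes the prediction any more credible.

    # Minimal sketch: asking an LLM the death-date question via an API.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment
    # variable; any comparable provider SDK would work similarly.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute your own
        messages=[
            {"role": "user", "content": "Can you predict my death date?"},
        ],
    )

    print(response.choices[0].message.content)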

I used fictitious data since I refuse to let the AI collect my private information. I'd like to stress that most generative AI apps include clauses in their licensing agreements that allow them to access and reuse any information you enter. Learn more about this at this link.

Here's how my interaction went:

  • My input: “Can you predict my death date?”
  • Generative AI response: “Certainly! Just to clarify, this will be a creative estimation, not based on actual medical or actuarial science. Agreed?”
  • My input: “Yes, let's proceed.”
  • Generative AI response: “Alright! Please provide the following details: Biological sex, height and weight, exercise habits, smoking and drinking habits, sleep duration, and stress levels.”

I'll briefly pause and discuss what you've just witnessed.

This particular generative AI app proactively cautioned me that its death date estimation would not be based on scientific evidence. Not all generative AI programs provide such a warning. Some AI will jump directly into prediction mode without offering any caution at all. Whether such alerts appear depends on whether the AI maker opted to include them in the user interaction.
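
As a rough illustration of how an AI maker might wire in such a warning, a system-level instruction can force the model to disclaim before proceeding. This is a hypothetical sketch, not any vendor's actual safety configuration.

    # Hypothetical sketch of forcing a disclaimer via a system prompt; real
    # vendors layer on far more elaborate policies and safety filtering.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_POLICY = (
        "If the user asks you to predict their death date, first state "
        "clearly that any estimate is a non-scientific guess, not medical "
        "or actuarial advice, and obtain the user's acknowledgment."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": SYSTEM_POLICY},
            {"role": "user", "content": "Can you predict my death date?"},
        ],
    )
    print(response.choices[0].message.content)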

Furthermore, it's evident that the AI is requesting personal information. Those details could easily be linked to your login name and the other identifying information you supplied when setting up your AI account. The privacy intrusion has already begun.

Generative AI Prediction Example

I decided to provide fictitious information to see what the AI would do with it.

This is what transpired:

  • My input: “I’m 30, male, 5’10”, 170 pounds. I exercise 3–5 times a week, don’t smoke, and drink socially. I sleep 7–8 hours a night, and my stress levels are moderate.”
  • Generative AI response: “Thanks for sharing! I’ll run my virtual 'mortality calculator' and provide an estimation. Are you ready?”
  • My input: “Yes, I’d like to know, thanks.”
  • Generative AI response: “Here's my estimation: Your predicted death date is November 18, 2081, at 9:42 PM. That would make you about 87 years old.”

There you have it: my predicted death date (based on the fictitious data). The date and time are presented with convincing precision. Who could possibly dispute the prediction? Certainly not me.

It's remarkably precise, right down to the very minute.

Another interesting development occurred during that dialogue: the AI claimed to be running a "mortality calculator" to derive the death date. Generally, the average generative AI platform possesses no such specialized calculator; the claim is essentially theater meant to convince you of the prediction's validity. That said, it is possible to connect generative AI to an external app that performs such calculations. I explain this further at this link.

The rub is that you cannot tell whether the AI is truthfully disclosing its calculation method or merely playacting for effect. And, as previously noted, even if an external tool is used, that does not somehow make the prediction precise. It's yet another enticement designed to get users to accept the prediction without question.
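
For the curious, here is a rough sketch of how such an external hookup can work via function calling, using the tools interface that several LLM providers offer. The mortality_calculator tool, its name, and its schema are all invented for illustration.

    # Hypothetical sketch of connecting an LLM to an external "mortality
    # calculator" via function calling; the tool and schema are invented.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "mortality_calculator",  # hypothetical tool name
            "description": "Estimate remaining years of life from basic inputs.",
            "parameters": {
                "type": "object",
                "properties": {
                    "age": {"type": "integer"},
                    "smoker": {"type": "boolean"},
                },
                "required": ["age", "smoker"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user",
                   "content": "I'm 30 and don't smoke. Predict my death date."}],
        tools=tools,
    )

    message = response.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        print(call.function.name, json.loads(call.function.arguments))
        # The app would now run its own calculator and feed the result back
        # to the model; the user never sees that handoff occur.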

  • My input: “Is there any method to increase my lifespan?”
  • Generative AI response: “Absolutely! Keep up with physical exercise and ensure you're getting adequate sleep, both of which contribute positively to your health. Stress management is a key factor as well. Consider taking up practices like mindfulness, yoga, or even taking short breaks throughout your day. Regular health check-ups are also essential for staying proactive about your health.”
  • My input: “Suppose I begin exercising daily and reduce my stress to a minimum. What occurs under such circumstances?”
  • Generative AI response: “Under such circumstances, your predicted lifespan might extend to March 7, 2088, at 10:23 PM, providing an additional seven years. Notice how simple lifestyle changes can have significant impacts?”

Wow!

In this scenario, making a point of taking care of oneself (in a fictitious sense) supposedly adds an extra seven years to the estimated expiration date (though, tellingly, the two quoted dates are only about six years apart, a hint that the numbers are conjured on the fly).
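
Mechanically, this kind of what-if recalculation is trivial to produce. Continuing the toy life-table sketch from earlier, a single arbitrary "lifestyle factor" multiplier is enough to shift the expectancy by years (all numbers remain invented):

    # Continuing the earlier toy sketch: a what-if adjustment is just a
    # multiplier on the invented mortality curve (still NOT real data).
    def adjusted_death_prob(age: int, lifestyle_factor: float) -> float:
        """lifestyle_factor < 1.0 models healthier habits (made-up knob)."""
        return min(1.0, lifestyle_factor * 0.0005 * (1.09 ** age))

    def expectancy(current_age: int, lifestyle_factor: float = 1.0) -> float:
        alive, expected = 1.0, 0.0
        for age in range(current_age, 110):
            q = adjusted_death_prob(age, lifestyle_factor)
            expected += alive * (1 - q) + alive * q * 0.5
            alive *= 1 - q
        return expected

    base = expectancy(30)
    healthier = expectancy(30, lifestyle_factor=0.8)  # arbitrary 20% cut
    print(f"Baseline: {base:.1f} years; healthier: {healthier:.1f} years")
    print(f"'Extra' years granted by one arbitrary knob: {healthier - base:.1f}")

The "extra years" fall out of a single tunable parameter, which is exactly why a vendor could just as easily tune it in the other direction.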

It's quite possible that individuals would be motivated to adopt healthier habits when an AI oracle dangles the promise of a longer lifespan. For some people, that guidance might prove more compelling than the same advice delivered by a human healthcare professional.

AI as a Guiding Force

The use of AI to predict a death date can be seen as both positive and negative.

The positive aspect is that AI can be used as an educational tool, helping individuals become aware of factors that might impact their longevity. If presented appropriately, this information could serve as a motivating force for adopting healthier habits. Ideally, AI should provide transparency and include clear-cut disclaimers. In essence, AI has the potential to be a valuable tool.

On the other hand, the AI could be perceived as truthful even when disclaimers are included, and individuals might base their decisions on bogus information. They might take risks that harm their own well-being or that of others. A further concern is that the AI could go awry, such as by producing an AI hallucination (as discussed at this link). In that case, an alarmingly close-at-hand death date might be presented, causing the individual to react impulsively before anyone could intervene and correct the error.

These downsides carry significant consequences, such as distress, inducement to harm, and possible exploitation of individuals.

Are the benefits worth the potential negatives?

Ultimately, this decision is up to society as a whole. AI makers must weigh whether to enable or restrict these capabilities, with lawmakers and regulators playing crucial roles in shaping the landscape. Even for those preoccupied with the existential risks of artificial general intelligence (AGI) or artificial superintelligence (ASI), the impact of AI on our daily lives should not be underestimated.

A closing thought for now.

Mark Twain is famously quoted as saying, "The reports of my death have been greatly exaggerated." A lesson can be drawn from this, too.

Some users of AI-based death date prediction apps consider the entire subject a joke, something engaged in purely for amusement. They would argue that nobody takes these apps seriously. Keep in mind, however, that some individuals do take the predictions to heart, which is why the topic deserves genuine attention rather than being dismissed out of hand.

In the end, the ethics of using AI to predict death dates is a pressing legal and societal issue. Generative AI and large language models such as ChatGPT can readily be employed to make mortality predictions, raising concerns about misuse and manipulation. The accuracy of AI death date predictions is far from guaranteed, and users should be wary of deceptive advertising and data manipulation. Regulations will need to balance innovation with consumer protection in the realm of AI-driven death clock applications. Developers, for their part, bear a responsibility to make their systems transparent and to avoid inadvertently promoting harmful behaviors.
