Palantir Technologies is continuing to use Anthropic’s Claude artificial‑intelligence model in certain applications even after the Pentagon reportedly categorized the model as a potential supply‑chain risk, according to remarks by Chief Executive Alex Karp highlighted in an Economic Times article titled “Palantir uses Anthropic’s Claude despite Pentagon’s supply-chain risk tag: CEO Alex Karp.”
The Economic Times report describes how Palantir, a major contractor for U.S. defense and intelligence agencies, has incorporated large language models from multiple developers into its platforms, including its Artificial Intelligence Platform (AIP). Speaking about the company’s approach to integrating third‑party AI systems, Karp indicated that Palantir continues to evaluate models from across the industry on performance and mission utility, even when government agencies flag potential sourcing concerns.
Anthropic’s Claude models have emerged as one of the leading alternatives to systems developed by OpenAI and Google. The startup, which counts major technology firms among its investors and partners, focuses heavily on AI safety and alignment. However, as the Economic Times report notes, U.S. national security agencies increasingly scrutinize the supply chains behind advanced AI systems amid growing geopolitical competition and concerns about data security and technology dependencies.
A Pentagon “supply-chain risk” designation does not necessarily prohibit the use of a technology; it signals that officials believe certain aspects of the underlying ecosystem—such as supplier relationships, infrastructure dependencies, or governance structures—require additional caution. Defense agencies have been tightening oversight of software and AI systems used in military and intelligence contexts as part of broader efforts to reduce exposure to strategic vulnerabilities.
Karp suggested that Palantir’s approach is pragmatic, emphasizing operational effectiveness and flexibility. The company has built its newer AI tools to function as orchestration layers that can incorporate different large language models depending on customer needs. That structure allows organizations to swap or combine models while maintaining strict controls over sensitive data.
The defense sector has become a major battleground for AI providers, with governments seeking advanced analytical tools capable of processing vast datasets while ensuring that classified information remains secure. Companies like Palantir have positioned themselves as intermediaries that allow public‑sector clients to experiment with cutting‑edge AI models without directly exposing core systems.
According to the Economic Times report, Karp argued that AI adoption within defense and national security institutions requires balancing innovation with risk management. While government agencies continue to evaluate vendors and impose safeguards, contractors may still use or test a range of models internally as they refine systems for operational use.
The episode underscores the complex environment developing around artificial intelligence in national security. As governments race to deploy powerful models, questions about supply chains, corporate governance, and geopolitical exposure are increasingly shaping procurement decisions. Companies operating at the intersection of Silicon Valley and defense are likely to face continued scrutiny as regulators attempt to ensure that critical technologies are both effective and secure.
