Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have yet to receive critical attention: overlooking how auxiliary factors can impede LLM performance despite competence (Type-I), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (Type-II). Mitigating these biases necessitates an empirically-driven, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, which can be done by supplementing carefully designed behavioral experiments with mechanistic studies.