AI in the Workplace:

A Tool for Interdisciplinary Thinking or a Crutch?*

Artificial intelligence has entered the modern workplace with the force of a structural shift, not just a new technology. It is already changing how professionals research, write, analyze, and make decisions. As its capabilities grow, so does a fundamental question every organization and every individual professional should ask: is AI amplifying human potential, or quietly eroding it?

The honest answer is probably both, depending on how we use it. I’ve been thinking a lot about what may be the most exciting and underappreciated promise of AI in the workplace: its capacity to enable truly interdisciplinary thinking. I also worry about the potential ethical and cognitive risks that come with over-reliance on AI, and I’m curious to explore what responsible adoption actually looks like.

The Case for AI as an Interdisciplinary Bridge

For decades, one of the great frustrations of working on complex, real-world problems has been the silo problem. A public health researcher studying health equity may have deep expertise in epidemiology but lack fluency in urban planning, housing policy, or behavioral economics, all of which are critical to understanding why certain communities face worse health outcomes. A climate scientist may understand atmospheric chemistry but struggle to model the economic trade-offs of different carbon policy frameworks. A technologist building a platform for underserved communities may not have the sociological grounding to understand the structural barriers their users face.

The traditional solutions, hiring teams of specialists or spending years developing interdisciplinary expertise yourself, are expensive, slow, and not always possible. This is where AI offers genuine potential. Rather than replacing experts, AI can function as an intellectual scaffold, letting professionals think beyond the boundaries of their own training.

A social worker trying to understand the regulatory landscape around housing assistance can now use AI tools to synthesize legal, policy, and economic frameworks. A physician thinking through the social determinants of a patient's condition can quickly gain an understanding of community resources, systemic inequities, and relevant research in real time. A nonprofit leader working on digital equity can rapidly develop policy arguments that draw from computer science, ethics, economics, and community organizing.

McKinsey’s 2025 workplace research likens AI’s potential to a 'cognitive industrial revolution,' noting that the technology’s long-term productivity opportunity across sectors may reach into the trillions [1]. But I’m less excited by productivity for productivity’s sake than I am by the potential to remove cognitive bottlenecks and let people work across domains beyond their own. Increasing productivity through collaboration speaks to me far more than the dollar value of churning out more widgets.
With faster and easier collaboration, we have a better chance of addressing the most pressing challenges of our time. Climate change, health equity, and digital equity are inherently interdisciplinary problems without single-discipline solutions. AI won't solve them, but it may finally give individual professionals the tools to approach them with the holism they require.

The Ethical Risks We Cannot Ignore

The same capabilities that make AI exciting also make it dangerous when used carelessly. There are at least three serious risks worth naming directly.

Algorithmic Bias and Systemic Harm

AI systems are trained on data, and data reflects existing inequalities. A healthcare algorithm trained on historical data may systematically underestimate risk in patients from communities that have historically received less care. A recruitment AI may replicate the biases encoded in decades of non-diverse hiring. As a 2025 review in Frontiers in Digital Health notes, biases in AI systems can exacerbate injustice, erode accountability, and destroy individual autonomy [2].

The irony is sharp: AI tools deployed with the goal of advancing equity can, if poorly designed and ungoverned, actively reinforce the disparities they aim to address.

Transparency, Accountability, and the Governance Gap

According to KPMG's 2025 global survey of over 48,000 workers across 47 countries, 64% of employees admit to putting less effort into their work when they know they can rely on AI, and 58% rely on AI output without thoroughly evaluating the information [3]. More than half admitted to presenting AI-generated content as their own. These aren't just individual ethical lapses. They represent a systemic governance failure. When accountability is unclear and outputs go unexamined, errors compound and trust erodes. In the post-truth world in which we find ourselves today, this is deeply concerning to me.

The Cognitive Cost of Over-Reliance

Perhaps the most underexplored risk is what happens to our thinking when we consistently outsource it. A 2025 Microsoft study on knowledge workers found that the more confidence workers placed in AI's ability to perform a task, the less critical thinking effort they applied themselves [4]. A study by Gerlich (2025) found a negative correlation between frequent AI usage and critical thinking abilities, driven largely by cognitive offloading, the practice of delegating mental effort to external systems rather than exercising it ourselves [5].

These findings suggest a troubling feedback loop: the better AI gets, the less we practice the cognitive skills that allow us to evaluate its outputs. And the less we can evaluate AI critically, the more we defer to it, regardless of quality. The risk isn't that AI will think for us; it's that we'll let it, and eventually lose the capacity to think for ourselves.

What Thoughtful Adoption Looks Like

I don’t want to give in to what seems to me to be the next moral panic over AI, but I also think there’s reason to remain vigilant. So what does responsible AI adoption actually look like in practice?

Use AI to expand your aperture, not bypass your judgment.

The most valuable use of AI in professional settings is not to receive an answer but to rapidly explore the landscape of an issue. Ask AI to surface the perspectives, disciplines, and evidence relevant to a question. Then apply your own training, values, and judgment to weigh them. AI is a research assistant and thought partner, not an authority.

Maintain deliberate cognitive practice.

Organizations and individuals alike should be intentional about preserving the conditions for deep thinking. This might mean designating AI-free problem-solving sessions, requiring professionals to articulate their own reasoning before consulting AI, or building evaluation habits into any AI-assisted workflow. As the ANSI Blog's analysis of recent research notes, higher education and confidence in one's own judgment serve as meaningful buffers against the cognitive atrophy that heavy AI reliance can cause [5].

Demand transparency and govern accordingly.

The governance gap is real and urgent. McKinsey's 2025 survey found that only 39% of C-suite leaders use any benchmarks to evaluate their AI systems, and of those who do, ethical benchmarks are the lowest priority [1]. This is backwards. Organizations deploying AI in human-facing domains such as hiring, healthcare, criminal justice, and financial decisions have an obligation to evaluate for bias, establish cadences of accountability, and ensure workers understand how AI tools are influencing decisions that affect real people.

Center the humans most affected.

Especially when AI is being applied to problems of health equity or technological access, the communities most affected need a voice in how these tools are designed and deployed. AI that claims to serve equity while excluding affected communities from its development is neither equitable nor effective. Can a tool that isn’t effective at achieving its goal even be called “intelligent”?

A Tool Worth Taking Seriously

The potential of AI to help us solve complex problems is real and significant. For the enormous challenges we face, from climate adaptation to health disparities to the digital divide, the ability to think across fields is essential.

But this potential is only realized if we use AI thoughtfully: as a scaffold for our thinking, not a substitute for it. The professionals who will use AI most effectively are those who bring deep human judgment to every interaction with it, who know what they're asking, can evaluate what they receive, and understand that no model, however capable, carries the lived experience, ethical responsibility, or accountability that comes with being human.


*Note: I used AI to assist me in writing this blog post. I had some ideas about what I wanted to focus on, points I wanted to make, and a lot of swirling thoughts I needed help bringing together. I asked Claude.ai to help me structure this post for clarity and conciseness and provide some references from relevant sources. You are reading a post that was collaboratively developed by a human (me) and an AI tool. 


References & Further Reading

[1] McKinsey & Company, 'Superagency in the Workplace: Empowering People to Unlock AI's Full Potential' (2025). mckinsey.com

[2] Frontiers in Digital Health, 'Biases in AI: Acknowledging and Addressing the Inevitable Ethical Issues' (2025). pmc.ncbi.nlm.nih.gov

[3] KPMG, 'Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025' (2025). kpmg.com

[4] Lee, H.-P. H., et al., 'The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers' (2025), Microsoft Research. blog.ansi.org (summary)

[5] Gerlich, M., 'AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking' (2025). Summarized in ANSI Blog. blog.ansi.org

[6] Santoni de Sio, F., 'Artificial Intelligence and the Future of Work: Mapping the Ethical Issues.' The Journal of Ethics, Springer Nature (2024). link.springer.com

[7] MIT Task Force on the Work of the Future, 'Building Better Jobs in an Age of Intelligent Machines' (2023). workofthefuture.mit.edu

[8] Tandfonline, 'AI and the Future of Industrial Work: A Framework for Enhancing Employee Experience from Satisfaction to Flourishing' (2025). tandfonline.com


Strategic Roadmaps: Taking Your Strategy from Vision to Action

Kat Jarvis, Founder & Principal, Piñon Advising

I must confess to being a bit of a strategic planning nerd. Stakeholder analysis? Love it! SWOT analysis? I'm practically shaking with excitement. But here's the problem: a traditional narrative strategic plan rarely translates well into actual implementation. Enter the Strategic Roadmap—your solution to the "Now what?" predicament that plagues so many organizations after completing a strategic plan.

What is a strategic roadmap? Think of it as the nexus between strategy and implementation—a visual tool that enables your organization to track key outcomes over a defined time period. It's a strategy, implementation plan, goal tracker, and scorecard all in one.

You may be wondering why more organizations aren't ditching their narrative plans for this more actionable alternative. From my experience, there are two main factors:

  • Not every organization has a template or understanding of how to structure this type of document

  • Some organizations are afraid of accountability (and a strategic roadmap keeps you accountable)

Fair enough! Creating a strategic roadmap from scratch can be intimidating, and there's real pressure in being accountable for specific outcomes. While there are nearly infinite ways to structure your roadmap, here are the key components every one should include:

Mission, Vision & Objective – Your strategy should always be grounded in your mission and vision. A clearly stated objective provides a North Star for your plan.

Strategies – Describe how you're getting to your objective. What smaller goals are you pursuing in service to the overall goal? Strategies differ from tactics (specific actions), but together they form the path to meeting your objective.

Goals & Impacts – We prioritize what we measure. Metric-driven goals and expected outcomes are critical to an actionable roadmap. Activities you don't measure quickly become lost in day-to-day work. Setting these upfront creates a key accountability mechanism for implementation.

Roles & Responsibilities – How often have you built a beautiful plan, set goals, and started implementing before realizing no one knows who's responsible for what? The most detailed, metrics-driven plan will fade into oblivion if you don't assign responsibility for individual actions.

Timeline – Because a strategic roadmap is visual, you can lay out your plan on a timeline and see the sequence of action. Set realistic deadlines and use your roadmap to track them.

Strategic roadmaps are living documents you update as you track progress. Yes, they hold people accountable, but they're also excellent tools for knowing when to pivot. A strategic roadmap won't miraculously transform your organization—you still need strong communication, cadences of accountability, and buy-in from implementers—but it provides a crucial framework and structure.

Next time your organization goes through strategic planning, consider how a strategic roadmap could help you through implementation. After all, what's the point of a plan if it never gets operationalized?

Dealing with Career Setbacks

Kat Jarvis, Founder & Principal, Piñon Advising

Two months ago, I was laid off from the City and County of Denver after five and a half years of service. Before joining the City, I spent 18 months networking and applying for 38 positions. Working in local government was my dream, and I pursued it relentlessly. I finally started on April 1, 2020—just two weeks after Denver’s COVID-19 stay-at-home order. The years that followed were challenging, meaningful, and deeply fulfilling.

When I learned my position was eliminated, I felt an unexpected mix of emotions. I’d already been navigating personal challenges, and while people say layoffs aren’t personal, they certainly feel personal when you’ve poured your heart into public service. My work is part of my identity, and I want my career to feel purposeful, not transactional.

So what do you do when you experience a major setback—especially when you care deeply about your work, the job market is unpredictable, and life isn't exactly smooth? I don’t have all the answers, but here are lessons I’ve learned:

Network with intention

Everyone says to network, but how you do it matters. Huge events aren’t always the best strategy. What has worked for me is intentional outreach, including the following:

  • Schedule informational interviews with people doing work you admire.

  • Tell your network what you’re looking for, and ask for introductions.

  • Go into meetings with a clear ask, whether it’s advice, a connection, or insight into a role or sector.

Stay humble, know your worth

When I anticipated my layoff, I applied broadly, including to the Governor’s Executive Internship Program, typically for students and early-career professionals. It wasn’t a conventional path for someone with my experience, and yes, it felt awkward at first. But it aligned with my five-year goal to gain statewide experience, and it helped me expand my network. I embraced being an almost-forty-year-old intern and contributed meaningfully. That experience ultimately helped me land a role with the State’s Office of Economic Development and International Trade.

Rest isn’t wasted time

Capitalism tells us to rise and grind. Unemployment told me to breathe. I initially sprinted through coffee meetings, interviews, and events, but I quickly crashed. So I made space to recharge: sleeping in, walking, reading, savoring slow mornings, and yes, the occasional afternoon glass of wine. Networking mattered—but so did recovery. Balance helped me stay motivated and grounded.

Protect your confidence

Shame hit hard. Answering “So, what do you do?” stung. The internship gave me structure, purpose, and opportunities to refine my story. Support from former colleagues reminded me of my value. Confidence shows up in interviews, networking, and decision-making, so invest in the people and activities that reinforce yours.

If you’re job-seeking, surround yourself with your hype team, reflect on your successes, and remember: someone needs exactly what you bring. Take a breath. Stand tall. Own your experience and your power.