I have some questions to do with the ethics, dangers and security concerns of using AI.
- AI right now is incredibly bad for both the consumer computer market and the environment, as it is buying up massive amounts of computer parts and using massive amounts of water. Is supporting this worth the understandably tempting prospect of reallocating a potential 60% of employees' time to other tasks?
- What strategies would you put in place to stop the creep of AI taking over things it shouldn't? For example, if some employees use AI to generate an answer to a question and don't proofread or fact-check it.
- How would you deal with the potential cybersecurity threat posed by unregulated access to LLM AIs? Many of them use conversations as training data, and if an employee puts confidential info into one, someone may later be able to learn that data, or data that includes or references it. This, to my knowledge, can only be circumvented by a local LLM model, which may be unattainable for some leaders without the necessary computing resources.
AI’s environmental and hardware footprint is very real, especially the power and water demanded by large data‑centre infrastructure, and recent UK work has warned that rapid AI‑driven data‑centre growth could seriously strain energy and water systems if it is not tightly regulated. So for me the 60–70% automation potential McKinsey talks about is only worth pursuing if leaders consciously reinvest that time into higher‑value human work, not just more admin, meetings and slideware.
On your “creep” point, I completely agree. AI should be treated like a bright but unreliable junior – useful for drafts and options, never for unchecked answers. In the teams I work with, the rule of thumb is: if you would not sign it without reading it, you cannot paste it from an AI without checking it either; leaders have to model that discipline or bad habits spread fast.
Cybersecurity is the bit that keeps many leaders awake at night. Public LLMs can log prompts and, depending on the provider and settings, may use that data to continue training, which is why security guidance flags risks such as confidential‑data leakage and sensitive‑information disclosure if staff paste in internal material. One answer is to use local or private models for high‑risk work, but just as important are clear policies on what never goes into any model, strong access controls, and regular training so people understand both the power and the limits of these tools.
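The "what never goes into any model" policy can even be partly enforced in software. Below is a minimal, hypothetical sketch of a pre-send prompt filter; the patterns, function name, and markers are illustrative assumptions only, and a real deployment would rely on proper data-loss-prevention tooling rather than a handful of regexes:

```python
import re

# Illustrative patterns for material that should never leave the organisation
# in a prompt. These are assumptions for the sketch, not a complete policy.
BLOCKED_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # explicit classification marker
    re.compile(r"\b\d{16}\b"),                       # bare 16-digit numbers (card-like)
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

# The second prompt would be stopped before reaching any external LLM.
print(is_prompt_allowed("Summarise our public press release"))       # True
print(is_prompt_allowed("CONFIDENTIAL: Q3 revenue draft attached"))  # False
```

A filter like this is a safety net, not a substitute for the training and access controls mentioned above; it catches careless pastes, not determined misuse.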
Anyone studying cyber security now is heading into a field that is only becoming more central as AI spreads through every part of organisational life. UK data already shows tens of thousands of cyber roles and a persistent skills gap, which means people who understand both security and AI will be in real demand over the coming years.
True. Leveraging AI is not just about using it as a tool; it is also about managing the risks that come with it.
Hey Daniel. These are three excellent questions!
Thanks for the answers; you make some very good points.