Career Advice

I have benefitted from some great career advice and nowadays people often ask me about how to build a career in Responsible AI. Here are some ideas. The people who will find this advice most useful are those who want to emulate a career roughly like mine, starting out in academic philosophy and then transitioning to an ethics consulting role or research scientist position in industry. But several of the principles are generalisable to other Responsible AI career paths.


Table of Contents

1. Caveats
2. First Steps
3. Finding a Niche
4. Networking
5. Job Applications
6. Getting Promoted
7. Mental Health

1. Caveats

Take this advice with a grain of salt. For anything that you want to achieve, the best advice on how to achieve that thing will come from people who have achieved that thing countless times. For example, if you want to make a chair, you should probably get advice from a carpenter who has made chairs countless times. Such a carpenter is likely to have a good causal model of how input variables in the construction of chairs relate to the relevant outcome variables. Landing a dream job is a rare event for most people. Hence most people have pretty naff causal models of how to land dream jobs and massively underweight the role of circumstance and sheer luck. I think the best careers advice comes from experienced hiring managers, recruiters, bar raisers, and the like. They have seen it all from the other side of the table and have great causal models with respect to the features of applications and candidates that are conducive to success. In this regard there are many people who are better qualified to give careers advice than I am.

The salient point to look out for is that my advice will be tainted with survivorship bias. I’m not able to reliably discern what was down to my character traits as opposed to me being in the right place at the right time. Still, I think the advice may be helpful to at least some people. The most important lessons I have learned are from things that went very badly wrong as opposed to things that went right. Two examples: (1) I used to be complacent. In high school I received an offer to do my undergrad at Cambridge and then missed my offer due to not working particularly hard. Now I am conscientious because I understand that a significant part of being successful is hard work. (2) Early in my career I spent way too much time following academic trends and not enough time thinking about what was important to work on from a societal point of view. What initially drove me to research was prestige and the result was that I got prestige at the cost of my own satisfaction with the work I was doing. I now work only on things which I think really matter.

One final caveat is that this advice is not for everyone. Don’t take what I say at face value. Read it carefully, think about it and make up your own mind on the degree to which my advice applies to you and the circumstances in which you find yourself. It’s possible, for example, that you have a different risk tolerance to me or that you value job security more than I do. I suspect that you might get the most out of the advice by making precise the respects in which it fails to apply to your situation. And that’s great. The whole point of writing this stuff down is to help you get to where you want to go, and it doesn’t matter to me whether you use it as an instruction manual or as material to rip the shit out of in order to formulate much better advice for your situation.

2. First Steps

This section is for people who don’t currently work in Responsible AI but would like to. What I’m trying to do is characterise the set of options available to people who are looking to break into the field.

There are many different career paths in the field of Responsible AI. They include academia, industry research, ethics consulting, policy and governance, community building, auditing, trust and safety, education, fundraising, legal and compliance, product management, and more.

Different roles have different requirements. For example, industry research typically requires a PhD and publications in top journals or conferences, whereas community building typically requires experience organising events. Some roles require a technical skillset. These include many research scientist and research engineer positions in industry, although there are some rare industry research scientist roles that focus specifically on ethics or social science. Other roles, such as policy and governance or ethics consulting, do not require a technical skillset, although they often require a strong conceptual understanding of AI. To get a handle on the different job roles out there and what skills are needed, take a look at the 80,000 Hours and All Tech is Human jobs boards.

It’s also fine to experiment with different roles. Experience is often more cumulative than people think and the boundaries between distinct kinds of roles are often porous. For example, I started out in academia, became a bioethicist, and then transferred to a research scientist position. One of my collaborators started out in front-end software development, switched to user experience research and now researches machine cognition and consciousness. Another collaborator was a program manager who started running ethics workshops for engineering and product teams, and used that to pivot into an ethics and policy research role. I think it’s fair to say that most people in Responsible AI are making it up as they go along and you should do the same.

To break into Responsible AI you should start doing Responsible AI stuff. You could pilot an ethics workshop or start an ethics speaker series in your workplace, for example; and if you are at university, you could create a Responsible AI society or host a Responsible AI hackathon. The point is to create leverage: stuff that demonstrates competence and fit for Responsible AI internships, jobs, degree programmes, and the like.

It may be possible to create enough leverage from your current position to apply for your dream job directly. If so, great. But if not, I’d recommend taking a hill-climbing approach in which you get incrementally closer to where you want to be via intermediate steps. These intermediate steps can be jobs. For example, if you are a coder and you want to work on ML fairness, maybe apply to be a software engineer on a trust and safety team first, and then use that position as leverage to apply for your ideal role as a research engineer on a dedicated ML fairness team. But intermediate steps can also be qualifications. I think the MSt in Practical Ethics at Oxford and the MSt in AI Ethics and Society at Cambridge are excellent courses for people who want to pivot. There are also organisations like Blue Dot Impact who run courses to help people break into the field. I’ve also noticed some universities offering short non-degree courses in AI ethics, such as the London School of Economics, although I’m uncertain how effective these things are for getting jobs.

It may even be worth doing a PhD. Whether or not to do a PhD is an extremely personal decision. They can be quite long and the pay is terrible (if funded) or negative (if unfunded). Even so, PhDs can be extremely rewarding. Doing a PhD is typically the best way to land an industry research scientist position, especially if integrated with internships at tech companies, and it’s also necessary for pursuing an academic pathway (i.e. a postdoc or a tenure track assistant professorship). If you do end up doing a PhD, I’d advise you to hit the ground running, publish extensively and present at high profile conferences. Overall seek to emulate the behaviour of an experienced academic. Merely having a PhD is insufficient for landing an industry research or academic position. It’s not even close. So if you are going to take this route, it’s important that you do so with the explicit intention of using the time to build an extensive research portfolio.

Last, I’d recommend getting some tailored advice from 80,000 Hours or getting a mentor through the All Tech is Human mentorship programme. Generic advice is only so helpful.

3. Finding a Niche

Responsible AI, as a field, is not yet at a level of maturity where career paths are well-defined. The people who work in Responsible AI often carve out niches for themselves and bend the scope of pre-established roles in a Responsible AI direction. The point is that you need to be entrepreneurial, although this is liable to change as the field becomes more mature. Here are three pieces of advice for carving out a bespoke career path for yourself within Responsible AI.

3.1. Understand Skill Arbitrage

You can greatly increase your value on the job market by acquiring cheap skills in one domain and selling them in another domain where those skills are in short supply and high demand.

For example, basically everyone who studies philosophy at the graduate level can formulate arguments, explain difficult concepts, articulate competing views and assess their costs and benefits, and give a half-decent presentation. Having these skills in academic philosophy is table stakes. The discipline is saturated with people who possess these skills to a high degree. Hence the mere possession of these skills is unlikely to get you very far. On the other hand, the skillset of a reasonably good philosopher is exceptionally rare in industry research and ethics consulting, and these skills are in high demand. Provided you can market your skills in the language of these other fields so that people understand how you satisfy their needs, the scarcity of philosophy skills in these other fields can make you an extremely valuable asset.

An important corollary to this point is that you can massively increase your value on the job market by acquiring rare combinations of skills that are in high demand. For example, the set of people who are competent AI policy specialists and competent software engineers is very small. So, if you are an AI policy specialist you can greatly increase your value on the job market by learning to code at least reasonably well. That puts you in a very rare skills bucket and enables you to engage in high-value work like translating policy ideas into engineering practice and interfacing between policy and engineering teams. My general advice in this regard is to pick a base skillset (e.g. policy) and then acquire 1-2 other skills (e.g. ML coding and specialist knowledge of CBRN risks) such that the combination of skills is in high demand and the number of people who share the relevant combination of skills is fewer than ~10 globally. This strategy requires you to focus on getting bespoke jobs, but the payoff in terms of market value is definitely worth it.

3.2. Don’t Follow the Crowd, Anticipate It

You ideally want to do good work on topics that are massively significant. Such topics matter in their own right, but are also instrumentally valuable insofar as a lot of useful stuff follows from doing good work on these topics (e.g. funding, citations, job opportunities). Where most people mess up is in chasing trends rather than anticipating them. Fields can very quickly become saturated as everyone rushes to get involved in the next big thing, and running into a saturated field is essentially a career cul-de-sac as the supply of labour exceeds demand.

What set me up in Responsible AI was doing a PhD on autonomous vehicle ethics between 2017 and 2020. The field was nascent at the time, which allowed me to make a name for myself quickly while also having access to cool opportunities like engaging with policymakers and giving talks at big conferences. Becoming a key player in autonomous vehicle ethics was instrumental in getting me a postdoc at Stanford in 2020-21. Stanford provided me with a professional network that made it impossible not to realise that LLMs were the next big thing, so I was able to pivot my research agenda towards LLMs and ultimately LLM-based agents, and ride that wave also.

My mental model here is one of making exponential progress by stacking diminishing returns curves. The first handful of researchers in a field see outsized impact in shaping the trajectory of the field and then the marginal impact of each additional researcher diminishes rapidly. Hence one way to make exponential progress is to catch the steep part of the curve in one emerging field, then use the leverage gained in that field to pivot to another emerging field, and catch the steep part of the curve there also. This process can be repeated in a rapidly evolving technological landscape. One big point of caution here, as I mentioned in the Caveats section, is not simply to chase the next big thing, as that will result in meaningless status chasing. In hindsight I think the entire field of autonomous vehicle ethics was largely a waste of time. But the high-impact topics that I’ve worked on more recently like AI welfare and the ethics of AI agents are in my view much more likely to have a positive societal impact over the long run.
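
To make the stacking model concrete, here’s a toy sketch in Python. Everything in it is illustrative: the saturating curve, the four-year pivot cadence and the assumption that each new field is roughly twice as significant as the last are stand-in numbers chosen to show the shape of the argument, not a claim about real careers.

```python
from math import exp

def field_curve(t, scale=1.0, rate=1.0):
    """Impact accrued after t years in one field: a saturating, diminishing-returns curve."""
    return scale * (1 - exp(-rate * t))

# Strategy A: stay in one field for 12 years.
# Strategy B: pivot to a fresh field every 4 years, assuming (illustratively) that the
# leverage gained in each field buys entry to a field about twice as significant.
banked, start, scale = 0.0, 0, 1.0
for year in range(13):
    if year - start == 4:          # pivot: bank the accrued impact, reset the curve
        banked += field_curve(4, scale)
        start, scale = year, scale * 2
    stay_put = field_curve(year)
    hopping = banked + field_curve(year - start, scale)
    print(f"year {year:2d}   stay-put {stay_put:5.2f}   field-hopping {hopping:5.2f}")
```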

There’s also obviously an element of risk here. Forecasters can badly misjudge the direction that things will go in. Three points in response: First, as with investing, diversification is important. You don’t have to go all in on any particular area. In my own career, I’ve invested heavily in some high-risk high-reward areas (e.g. AI welfare), but I’ve done that alongside substantial investment in much safer bets (e.g. ML fairness in healthcare). I believe that AI welfare will be a massively important public conversation. But I could be mistaken, and if I am, I have maintained active participation in other more conventional literatures that I can lean back into if needed. Second, don’t underestimate your agency in making big things happen. For example, whether or not a literature that you’re contributing to becomes a big deal is in part within your control. Doing great work will attract great people. Third, with the exception of frontier ML work (in which forecasts are typically based on extrapolating empirical regularities that may or may not continue to hold indefinitely), forecasting in Responsible AI is essentially a matter of information arbitrage because most fields lag behind the technological frontier. There is, for example, a 1-3 year time-lag between a particular ML development and philosophers starting to discuss that development in the literature. So simply being ahead of the curve with respect to the philosophers and on the curve with respect to ML is usually sufficient. But this time-lag is probably shrinking.

3.3. Leverage Branding

Did I mention that I went to Stanford? I appreciate that this sounds like extremely crass advice, but it is the case that being affiliated with prestigious universities and companies makes a huge difference to your career. I first started thinking about this point when I was doing my PhD at Bristol. For context, Bristol is a good university in the UK, but it is not Oxford or Cambridge. I noticed that the placement record for Bristol’s PhD programme (that is, a list of the places that PhD graduates from the relevant department end up getting jobs) was not great. Many people did not get jobs and those that did get jobs typically ended up at comparable and often lower ranked universities than Bristol. I realised that I needed to get myself in a better reference class before hitting the job market and it seemed like the easiest way to do this was to get a better university affiliation. So, I applied to a research assistant position at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and worked extremely hard to ensure that I got the position. (That is, I prepped for interviews like my life depended on it.) Fortunately, I got the job and was able to apply to postdocs at Stanford and Oxford with a dual affiliation.

Once I got to Stanford, things felt qualitatively different. There is a massive credibility boost and people take you extremely seriously. I’m not sure that I would have gotten into Google had I not been applying from Stanford (or a relevantly similar university). Minimally, I’m sure it helped a lot. Being at Google has a similar effect. Once again, I don’t like the fact that this is true, but I think one of the best pieces of career advice I can give is to barnacle yourself to a high-status institution. (For what it’s worth, there are also hacky ways to do this, like getting some kind of affiliate position at a prestigious university; such positions may look high-status to people outside academia, but academics will typically give them minimal credibility.)

I also want to register that these big-name universities and companies can seem out of reach. It’s extremely difficult to get into Cambridge, Stanford or Google, and I don’t want to trivialise how much effort is required to make these leaps. Much of what I say in the Job Applications section helps to demystify the process of breaking into these top-tier institutions.

4. Networking

I am autistic and therefore frightened of people. Hence networking is not something that I am particularly good at and not something that I have invested a huge amount of time doing. But I think my aversion to networking has helped me make a stark assessment of the cost-benefit profile of networking with respect to career growth. The advice here is specific to research careers in industry and academia and may not generalise to network-heavy career paths like community building, advocacy and ethics consulting.

4.1. Everyone Networks

The basic problem with networking is that (almost) everyone is doing it. Here I’m talking about the kind of networking that involves sending cold outreach messages on LinkedIn or staying behind after someone gives a talk to introduce yourself. If you are at Career Level 1 and you want to network with people at Career Level 2, you need to realise that each person at Career Level 2 has almost everyone from Career Level 1 trying to network with them. That makes it extremely difficult for Career Level 1 folks to stand out from the crowd. It is, of course, possible to stand out from the crowd. But what that takes is significant time investment, both in reaching out to a broader set of people at Career Level 2 and in making sure that the individual networking instances are well-tailored (i.e. you need to research each person who you are trying to network with, know their work inside out, and make a really excellent pitch to them).

4.2. Opportunity Costs

Because networking well requires a significant time investment, it carries a massive opportunity cost. Time spent networking is time not spent reading or writing or building stuff. My view is that it’s better to invest the time growing your skills and creating value than it is to invest that same time networking, all else equal. To be sure, the probability of a successful networking instance (i.e. introducing yourself to someone who ends up tangibly benefitting your career) is much higher when you deliver them something of value. For example, you could read a study, adapt it in some novel way, and then email the researcher who led the study about what you did. That’s a tantalising hook, and the kind of thing that might land you an internship. This is a much better way to network than sending out hundreds of non-tailored outreach emails.

It’s also worth noting that networking is not the only way to attract the attention of higher-status people. I’ve found that if you do good work and put it out there (e.g. presenting at conferences and publishing papers), then people will come to you. Here what’s important is that each time you present your work at a conference, you give a presentation that’s so obscenely better delivered and better thought-out than everyone else’s that higher-status people can’t help but network with you. Ditto for writing excellent papers. That’s not to say that the occasional well-targeted email to a high-status person isn’t worth doing. But really the best way to network is to become so valuable that other people want to network with you.

4.3. Aim Low

Despite the above, I think it’s important to keep your eyes peeled for opportunities to engage higher-status people in conversation. For example, at conferences, I think it’s generally a good idea to attend the conference dinner and any drinks associated with the conference. That’s a good way to talk to higher-status people, and even if they don’t give you an amazing career opportunity, they may well give useful career advice or feedback on stuff that you’re working on.

Where many people mess up is in thinking that only the very highest-status people are worth engaging with. For example, basically everyone at NeurIPS tries to get a selfie with Yann LeCun, but getting a selfie with Yann is unlikely to translate into serious career opportunities. Hence a really good idea is to target those who are somewhat higher-status than you and might realistically be in a position to help with your career growth. For example, if you are a PhD student, talking to one of the postdocs of the professor whose lab you want to work in would be an excellent strategy. The postdoc may have informative career advice and may also be able to advocate for you to the professor.

To spell out this idea a bit further, there are, at least, three good reasons to concentrate on somewhat higher-status people as opposed to exceptionally high-status people. First, the somewhat higher-status people have fewer people networking at them, so it’s easier to get their attention. Second, somewhat higher-status people often have a more nuts-and-bolts picture of the current job market because they have been through it recently. That’s not to say that very high-status people lack that understanding. Some have an exceptionally clear picture of the job market. But I find that somewhat higher-status people (e.g. PhDs, postdocs) often have useful information about the on-the-ground reality of applying to jobs in their field. Third, somewhat higher-status people are often willing to dedicate more time to up-and-coming people, at least compared to exceptionally high status people. The latter are generally very busy.

4.4. Second-Order Networking

Second-order networking is the idea of getting other people to network on your behalf, rather than doing the networking yourself. I’ve found this to be one of the most powerful forms of networking. Basically every high-status person has a trusted ‘inner circle’ of colleagues, such that if those colleagues vouch for someone, they will at least be willing to talk to the relevant person. The people in the ‘inner circle,’ in turn, have their own ‘inner circles.’ Hence an extremely effective strategy is to gradually fall into the orbit of high-status people by giving a clear value-add to second-degree ‘inner circle’ contacts, so that a second-degree ‘inner circle’ person vouches for you to a first-degree ‘inner circle’ person, who in turn vouches for you to the relevant high-status person. Here I cannot stress enough that the strategy will not work unless you are providing clear value-add to each person in the chain. Vouching for someone is a big deal because it takes up political capital, i.e. if someone vouches for you and you are a dud, it makes them look bad and gives them less leverage with the high-status person in future.

4.5. Small Wins

Many people get disillusioned with networking because they want some particular outcome (e.g. an internship), and no matter how hard they try nobody that they talk to is willing to give them that outcome. But higher-status people can give you all kinds of things that, while falling short of the desired outcome, are nevertheless conducive to the realisation of that outcome. You might get important information that some high-status person is a bad PhD supervisor, for example, where failure to get that information could easily have set your career back by many years or totally fucked it altogether. You might get important information about the politics of Responsible AI. It matters, for example, that you understand the role of the Effective Altruism movement in funding various AI safety and governance initiatives, and how getting involved with the EA community can open certain doors, but also close others. You might even get an intelligent steer on a research project that makes the difference between a publication in FAccT and a devastating series of rejections that makes you give up on Responsible AI. The lesson is that even seemingly small things can have big consequences with respect to your desired outcome over the long term.

One thing I wish I’d known when I was starting out is the information density with which high-status people communicate. People who are extremely successful are judicious with what they say and tend to speak with extreme precision. If such a person is telling you something, there is a good chance that what they are telling you is extremely important, even if it is not immediately obvious to you how or in what way it is important. This is not to say that you should believe the things that extremely successful people tell you without critical reflection. Rather, the point is to listen very carefully to what they say because the average information per word spoken is much greater for extremely successful people than it is for the general population. For example, a VP at Google who I respect massively once said to me that ‘if a person more junior than you is motivated to do something, just get out of the way.’ (This was in the context of me having just got in the way of someone who was independently motivated to do something.) I didn’t appreciate in the moment how valuable this feedback was, but after stewing on it for a few months I came to realise that in eighteen words he’d communicated the principal difference between good managers and bad managers in high-agency environments.

4.6. Gradient Ascent

This advice may be unhelpful because the whole point of networking is to find a good job. But one of the best ways to network is to work at a place with great people. Then it’s hard not to network as you naturally exchange ideas with colleagues. The actionable version of this advice is to gradually improve your network by gradually improving your job. Landing positions in orgs that are closer to where you would like to be will definitely increase the chance of bumping into people who can connect you with the kind of individuals that you ultimately want to speak with. The degree to which my personal network grew once I landed roles at Cambridge, Stanford and Google is hard to overstate. These places are bursting at the seams with amazing people.

One related piece of advice is that there’s stuff you can do in the workplace to leverage the people around you as helpful contacts. First, create opportunities for others. You want to be like the workplace version of Oprah: you have an opportunity, and you have an opportunity… When you help other people, it’s likely that they will help you back at some later time, and even if that person lacks the leverage to benefit you directly (e.g. because they are more junior), they will probably say nice things about you which makes better-leveraged people want to help you. Second, be very competent. You can get surprisingly far by promising to deliver something high-quality in a given timeframe and then actually delivering it in that timeframe. Third, you can communicate to more senior colleagues ways in which you are looking to grow your career. Senior people have often benefitted from mentors and are often happy to become mentors themselves. If your workplace is not like this, I recommend that you find a better one.

5. Job Applications

The process of applying for jobs in Responsible AI is quite mysterious. What looks like a well put together process on the outside may be much more arbitrary than it appears, and hiring managers ought to be understood as ordinary people with flaws and biases. Hiring is not fair. Often it’s disorganised and in some cases the entire process is a front for hiring a pre-selected candidate. I’m sorry that this is the situation. Here are some tips to navigate the process.

5.1. Quality, not Quantity

Jobs in Responsible AI often receive many hundreds of applications. These numbers make the job market look more competitive than it is. In reality, the vast majority of applicants are not even remotely suitable for the relevant roles. Serious candidates often make up ~10% of applicants. Hence it’s pointless to send out a generic CV and cover letter to hundreds of roles that seem like they might be a good fit. You will not beat a candidate who has invested that same time in finding a role that is a perfect fit for their skillset and crafting the perfect application.

You obviously cannot guarantee that you will get any particular job. But I think you can ensure that you reliably end up in the top 5-10% of applicants for the roles that you’re applying to, and by consistently applying to such roles you will eventually land a job. To be clear: Consistently ending up in the top 5-10% requires being extremely selective about which roles you’re applying to. To illustrate, in the final year of my PhD, I applied to exactly two postdocs: one at Oxford, where I came second, and one at Stanford, where I got the job. While there was some amount of luck here, I had extremely strong application materials and was well prepared for interviews. I didn’t waste any time applying to moderately-good-fit roles. Rather, I spent all of my time crafting the best possible application for positions that were perfect for my skillset.

But wait, isn’t this a risky strategy? Yes, kind of. The basic trade-off here is analogous to the bias-variance problem in ML. You can make yourself an obscenely good fit for a particular kind of role, but then you will not generalise well to other roles, and so will be reliant on specific kinds of roles coming up or even on the creation of bespoke roles. The alternative is to be an OK fit for many different roles in which case you will be far less likely to succeed in any particular application but will be able to apply to more roles. How you adjudicate this trade-off is a matter of personal preference and I’d recommend pursuing my strategy only if you are willing to put in the work to reliably end up in the top 5-10% of applicants in a small handful of roles. My hunch is that if you want to be massively impactful it is best to go all-in on something specific. But the advice that I’m giving here could easily be tainted by survivorship bias, so take it with a massive grain of salt. To steel man the other side: it’s possible that the vast majority of people who tried my strategy failed.

5.2. Honest Signals

Recruitment platforms are like an extremely dry version of Tinder where hiring managers are forced to swipe through hundreds of profiles in which applicants present highly curated material about themselves in the hope of landing a first date. To stand out you need the hiring manager to gasp with delight when they see your application. Here are three tips on how to do this.

First, see above. If the role is only a moderately good fit, it’s generally not worth applying to. Second, the way to distinguish yourself is honest signals of extreme competence and fit. For example, to land an industry research position in ethical AI, I’d recommend publishing in FAccT, AIES or CHI, and to land an industry research position in technical AI safety I’d recommend NeurIPS or ICLR. I get that this is a high bar. But that’s exactly the point. To stand out you want a no-ifs-no-buts demonstration of your competence and fit, i.e. a signal that can’t easily be faked. (Relatedly, I tend to assume that the last two years of a candidate’s output are a good predictor of the next two years, so quality and quantity of recent output is important.) Third, ideally you want to know who the hiring manager is, and tailor the application to them. This doesn’t mean embellishing your credentials or being sycophantic. The idea is rather to appreciate that the hiring manager is looking for someone to work with. Frontloading features of your portfolio that are likely to appeal to them (such as publications on a topic that they have published on) is a good way to pique their interest.

There is also a related point which pertains to what to emphasise in CVs and cover letters. My mental model is that you want to showcase the marginal difference between you and other candidates, i.e. what do you have that other people don’t have, relative to the needs of the hiring manager. Everything else is essentially noise. For example, your BSc in Computer Science is probably not the most interesting feature of your application to a research engineer role. Like 80% of the people applying have BSc degrees in Computer Science, so it’s a waste of signal to overcommunicate that aspect of your portfolio. In contrast, your random side project that builds on the hiring manager’s work will really make you stand out. Double down on stuff which distinguishes you given what the hiring manager cares about.

5.3. Uninformed Hiring Managers

The advice above assumes that the hiring manager knows what they’re doing. This is a fairly safe bet for companies that already have Responsible AI teams of the kind that you want to apply to. But it’s not a safe bet in general. It’s possible that the role to which you’re applying is the first ever Responsible AI role in the company, or that it’s the first Responsible AI role of a particular kind, or that the company has Responsible AI people of the relevant kind but the org that’s hiring doesn’t know about them or their hiring practices. When this happens, the process by which applicants are evaluated can be totally insane. In the worst case, we’re talking about humanities and social science PhDs being forced to attempt coding interviews despite never having attempted a LeetCode problem. But the more common failure mode is where the hiring manager has a preconceived idea about what Responsible AI people do that’s badly confused. For example, the hiring manager might think that they need an ethical framework, and so their rationale for hiring an ethicist is to bring in an expert who can make an ethical framework. The hiring manager may in turn fail to realise that ethics consultants are the kinds of ethicists who make frameworks and instead advertise the role as being for academic moral philosophers (where academic moral philosophy is a distinctly frameworkless discipline). Hence the interview process may involve a bunch of questions about frameworks which are totally miscalibrated.

My advice for these situations is to assess in your initial screening call the degree to which the company knows what they’re doing. It’s good to go in with calibrated expectations about the delta between your actual skillset and the skillset that they think someone like you should have. If the hiring manager or recruiter suggests doing something that you think you will fail at, e.g. a coding interview, I would advise strongly against doing it. It’s better to withdraw on grounds of bad fit than it is to waste time going through a miscalibrated interview process. Provided the miscalibration is mild (e.g. confusion about what bioethicists do), I’ve heard people successfully employ one of two strategies. The first is to own your expertise and explain to interviewers that their questions are miscalibrated, and then provide a kind explanation of what you actually do. The other strategy is to be quick on your feet and answer the questions as best you can. Many people I know have successfully landed jobs using both strategies. I have no clear signal about what’s best. It may be the luck of the draw.

The fact that some hiring managers don’t have a clear picture of your skillset can also be an advantage. I’ve experienced, and heard about, some really excellent interview practices in which the interviewer simply asks the candidate what kind of value they bring to the table. This kind of honest open-mindedness and willingness to learn about your skillset is an excellent signal.

5.4. Creating an Ideal Role

It can make sense to get your foot in the door at a company by landing a job that you are qualified for and then working from that platform to carve out your ideal role. This is essentially the strategy that I employed at Google. The research scientist position that I currently hold did not exist at the time of my application (although there did exist one or two similar positions at DeepMind). I initially applied for a bioethicist position at Google Health which was specifically targeted at academic philosophers. The bioethics role was at least on paper a good fit for me. During my PhD, I worked as a research assistant at Cambridge, and the focus of that position was explainability for medical AI systems. I also had one forthcoming paper on ML fairness in healthcare and two further health-related papers in the pipeline, alongside two bioethics papers in the Journal of Medical Ethics that I’d published as an MA student. So while bioethics was not my core focus, I had enough signals of competence and fit to successfully land the position.

Becoming a bioethicist was an enormous learning experience for me. While bioethics is in some respects a part of philosophy, I think it’s better characterised as a philosophy-adjacent field that largely operates with its own playbook. Bioethicists publish in different journals and go to their own conferences and often operate in medical schools rather than philosophy departments (this is not to discount the fact that some exceptionally good work in bioethics happens in philosophy departments). There was, accordingly, a whole world of stuff I didn’t know about. This included basic clinical knowledge and also knowledge of research ethics like how IRBs work and how to evaluate the ethicality of proposed research studies. I invested a tremendous amount of time in upskilling so that I could practice the role effectively, and my training as a philosopher was very helpful in providing a solid intellectual foundation on which to learn about bioethics. But I was also the person on the team who had the deepest understanding of ML and moral philosophy, and I leveraged this to construct a niche as a bioethicist with a unique and valuable skillset.

Still, I remained a bioethicist for only 18 months. Several factors contributed to my decision to find a new role. First, my husband and I had moved out from the UK to the Bay Area. I was on an O1 visa and he was on the spousal O3 visa, which meant he couldn’t work until the Green Card process was completed (which even on an O1 would take a while). For this reason, we decided that it would be best to move back to the UK. Second, a few months in, I realised that research would be a relatively small part of the role that I was doing. I applied thinking that the role would be mostly research and have a small component of ethics consulting. But it was the other way around. I learned a tonne of stuff doing ethics consulting and I’m hugely grateful for the skillset that I picked up there. To start with, it gave me wonderful access to hundreds of teams across the company, so I was able to internalise the structure of Google and also have a clear sense of the portfolio of technologies that were being developed. I also got to interface with a number of senior stakeholders and to advise on some consequential decisions. But ultimately research is what I wanted to do.

Third, the whole point of moving to Google was to work on the ethics of frontier AI systems, and despite my protestations Google Health was not yet AGI pilled. The focus was on conventional AI systems like CNNs for medical image classification. So I became a bit like a bee trapped in a jar trying to work on cool stuff but lacking the institutional support to do so. This is an extremely bad position to be in from a career growth perspective. You want the backing of senior folks otherwise you’re going absolutely nowhere. Fourth, I was concerned about layoffs. Meta was doing layoffs and it seemed plausible that Google would follow suit. My algorithm for assessing layoff risk is to ask, first, ‘of the teams in my immediate org, which is least critical?’. Then ask: ‘of the orgs in my meta-org, which org is least critical?’. Then: ‘of the meta-orgs in my meta-meta org, which is least critical?’, and so on until you hit the basic divisions of the company. I thought I was at risk because health was a side-quest of Google and bioethics was a side-quest of health.
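
Spelled out as a toy sketch, the heuristic is just the same check run at every level of the hierarchy. The org units and ‘criticality’ scores below are entirely made up, and the scoring itself is a judgment call rather than data; this is only meant to show the shape of the reasoning.

```python
def layoff_exposure(path):
    """path: list of (your_unit, {sibling: criticality}) pairs, from your team up to the company.
    Returns the levels at which your unit is the least critical of its siblings."""
    return [unit for unit, siblings in path
            if min(siblings, key=siblings.get) == unit]

# My situation at the time, roughly: bioethics was a side-quest of health,
# and health was a side-quest of Google. (Units and scores are illustrative.)
my_path = [
    ("Bioethics", {"Bioethics": 2, "Clinical ML": 7, "Deployment": 5}),
    ("Health",    {"Health": 3, "Search": 10, "Ads": 10, "Cloud": 8}),
]

print(layoff_exposure(my_path))  # ['Bioethics', 'Health'] -> exposed at every level
```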

So began the extremely stressful process of trying to find another job within Google. I felt that layoffs were imminent, and to make matters worse basically nobody had headcount due to a company-wide hiring freeze (note: a common precursor to layoffs). There was one role going at DeepMind for an Ethics Research Scientist focusing on genomics. This role was a bad fit owing to my complete lack of knowledge of genomics, but I applied anyway and was rejected after interviews. This fuck-up is what convinced me that you should only apply to roles if they are a (near-) perfect fit.

Fortunately, I got lucky. Since joining Google I’d made a concerted effort to get involved in the broader ethics ecosystem beyond my immediate team. In particular, I became a contributor to Google’s Moral Imagination Program, running ethics workshops for research and product teams across the company to help them negotiate ethical tensions in their work and translate those discussions into actionable responsibility objectives. One of the leads from that program stuck their neck out for me and asked their Director if they could transfer me over to Google Research. That got me a meeting with the Director, and despite the fact that a fire alarm went off in the middle of our meeting, I made a good impression and she vouched for me to her VP. That got me a conversation with the VP (cf. second-order networking). Turns out the VP was really into philosophy, and he green-lit a transfer. I then interviewed for a bespoke research scientist position in philosophy which was essentially my dream job, and arranged ahead of time for the role to be based in London so that we could move back home.

I started the new role in Google Research on Monday 16th January 2023. On Friday 20th, Google did its first ever massive round of layoffs and the entire org of which I was previously a part was cut. Had I started one week later I would have lost my job. Machiavelli talks about the twin forces of virtue and fortune. I think this ordeal involved a healthy dose of both. It was within my control to anticipate that layoffs were imminent, build a strong network of advocates beyond my immediate team, and be so obviously competent that hiring me was a no-brainer. It was also in my control to initiate the transfer as soon as possible, which was needed because transferring me in the context of a hiring freeze took six months all-told. But I also had the fortune to meet the right people at the right time, for critical conversations to go well, and for my start date to be ahead of the layoffs. I’m not sure that what I have said here constitutes a playbook for carving out a dream job within a tech company, but it is certainly one way to do it.

6. Getting Promoted

My advice on promotion speaks mostly to industry roles. For context, I joined Google as an L4 Bioethicist on a Trust and Safety ladder and switched to Research Scientist after 18 months. From there, I was promoted from Research Scientist (L4) to Senior Research Scientist (L5) within 10 months, and then to Staff Research Scientist (L6) 18 months after that. Hence I ended up as a Staff Research Scientist at age 30, which is rare but not unheard of at a company like Google. It took a while to get a clear sense of what’s involved in promotions, but once I was on the right ladder I managed to advance fairly quickly.

6.1. Impact

People always say that to get promoted you need to have impact. What makes this guidance unhelpful is that ‘impact’ is a thin concept. It’s a bit like telling someone in the midst of a moral dilemma to do the right thing. Obviously, they should do the right thing. But ‘rightness’ is too thin as a concept to guide action. Making sense of impact in Responsible AI is more complicated still because many Responsible AI roles are focused on stopping bad things from happening, and it’s difficult to evidence subjunctive conditionals in a promo packet (e.g. ‘were I to have acted differently, some bad thing would have happened’). Furthermore, even if you achieve a positive outcome rather than preventing a negative outcome, the outcome variables with which we are concerned in Responsible AI can be rather elusive and hard to measure (e.g. fairness).

The first thing to understand is that the concept of impact, as it features in tech promotions, is not designed for Responsible AI people. It is designed for software engineers who can have obviously measurable impact (e.g. I implemented feature F which improved metric M by X%). This is unfair but it’s something we have to deal with.

The second thing to realise is that demonstrating impact in Responsible AI is about storytelling. Basically, you need to define a problem, solve it, and explain the value of your solution relative to the objectives of the org in which you are embedded. Here’s a three-step guide.

First, you can be quite creative with respect to problem definition. Perhaps your ethics advising function is spending too much time in a reactive mode, giving bespoke feedback to individual teams, rather than investing in an anticipatory strategy which seeks to forecast what kinds of advice will be needed in future and doing the research to provide well-thought-through advice ahead of time. Perhaps product policy folks lack a clear direction on a novel emerging issue like human-AI relationships. Perhaps many people in your org are worried about the direction of travel and lack an outlet for their concerns. Perhaps ethics consultations are happening only in pre-launch reviews and thus creating friction with teams who are under pressure to ship.

You can also be creative in developing solutions. Try piloting a horizon scanning initiative to anticipate future demand for your ethics consulting function. Then if the pilot shows clear value, perhaps you can scale it into a dedicated workstream. Maybe you can help product policy teams develop a human-AI relationships policy by writing a literature review of ethics work on the topic or even initiating a novel research project to explore the issue. Perhaps you can provide an outlet for org-wide concerns about the direction of travel by creating an ethics forum in which people can exchange ideas about their concerns and translate them into a distinctive set of ethics OKRs for the org. And perhaps you can make an early-stage ethics intervention to help teams in the ideation and planning stages of projects to reduce the friction that you’re observing in pre-launch reviews. These are all potentially impactful solutions to well-defined problems.

The last thing to nail is measuring the value. The best advice I can give here is to have a clear idea of what success looks like before embarking on any particular adventure. Consider the horizon scanning example. Perhaps what you need here is a system for classifying whether particular ethics consultations pertain to ‘new issues’ or ‘issues for which stock advice exists.’ Then maybe it makes sense to develop a bank of ‘stock advice’ to which the horizon scanning initiative can contribute. Moving forward, you could then track the fraction of consultations that are covered by the ‘stock advice’ developed as part of the horizon scanning initiative. If you can show that you successfully anticipated a bunch of upcoming consultations and were able to provide well-thought-out advice, then the horizon scanning initiative has shown positive impact.
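
As a minimal sketch of the kind of tracking infrastructure I have in mind, the coverage metric itself can be very simple. The record format, topics and labels below are hypothetical; the substance is in doing the hand-labelling consistently, not in the code.

```python
from dataclasses import dataclass

@dataclass
class Consultation:
    topic: str
    covered_by_stock_advice: bool  # was this issue anticipated by the horizon scanning bank?

# One quarter's consultations, hand-labelled against the stock advice bank (hypothetical).
quarter = [
    Consultation("synthetic voices in ads", True),
    Consultation("AI companions for teenagers", True),
    Consultation("open-weights release of a fine-tuned model", False),
    Consultation("clinician-facing triage interface", True),
]

coverage = sum(c.covered_by_stock_advice for c in quarter) / len(quarter)
print(f"Consultations covered by stock advice this quarter: {coverage:.0%}")

# Tracking this fraction quarter over quarter is one concrete way of evidencing that
# the horizon scanning initiative is genuinely anticipating demand.
```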

Tracking impact for research is a bit easier. Did you write a paper? Did it get accepted to a top journal or conference? How many citations did it get in its first year? Did the research get any media coverage in well-known news venues? Was the research used as a basis for evals or product policy? Did your Director, VP or SVP write any positive feedback about the work? You can draw on all of these things and more in telling a story about the impact of your research.

For the example of creating an org-wide ethics forum it again makes sense to have a good idea of what success looks like. Translating a diffuse set of ethical concerns into OKRs for your org is impactful. But it’s also possible to gather feedback from participants. What was the nature and quality of their concerns before the forum was set up, and to what extent did the forum result in outcomes that addressed their concerns? Qualitative feedback from senior stakeholders is also helpful. If your VP said it had a positive impact, that’s great material for your promo packet.

For the example of creating an early-stage ethics intervention, it again helps to have some tracking infrastructure in place, as with the horizon scanning initiative. Perhaps you could assess the degree of preparedness of teams at the point of contact with pre-launch ethics reviews. Did the team come totally unprepared for an ethics review or were they ready and waiting with a solid set of anticipated risks and well-thought-out mitigations? In a similar vein, you could also track the complexity of the changes that teams need to implement post-review. Then you can A/B test with the aim of showing that teams who receive an early-stage intervention are, in general, more prepared for pre-launch reviews and require less complex changes prior to launch.
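
As before, a minimal sketch with hypothetical teams and a made-up 1-5 preparedness rubric. With enough launches you can make the comparison more rigorous, but the basic shape is just this:

```python
from statistics import mean

# Preparedness scores at pre-launch review (1 = arrived unprepared, 5 = arrived with
# anticipated risks and well-thought-out mitigations). Scores are illustrative.
with_early_intervention = [4, 5, 4, 3, 5, 4]
without_intervention = [2, 3, 2, 4, 2, 3]

print(f"mean preparedness, early intervention:    {mean(with_early_intervention):.1f}")
print(f"mean preparedness, no early intervention: {mean(without_intervention):.1f}")

# With a larger sample, a non-parametric test (e.g. scipy.stats.mannwhitneyu over the
# two groups) gives a more defensible claim than a difference in means alone.
```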

The overall lesson here is that there are many ways to show impact, but you need to create the measurement infrastructure to demonstrate the impact that you’re having. This requires a clear sense of what you want to achieve through your initiative and for the initiative to succeed by the lights of the success criteria that you initially proposed. To be totally honest, you will probably need to create a lot of this infrastructure yourself because nobody is going to create it for you. But that’s part and parcel of being in a new field where the path is not well-trodden.

6.2. Collaboration

One trap that people can fall into when trying to get promoted is focusing too much on their own individual impact. Here are three ways in which overindexing on your own impact is harmful. First, to be promoted you need to be well-liked, as nobody wants to promote an asshole. A tendency to focus on your own impact rather than uplifting others is a good way to become an asshole. Second, being promoted requires having outsized impact, and having outsized impact often requires leveraging others in the service of your goals. There’s only so much that one person can do individually, and at L5 and beyond the kind of impact that you need to have in order to get promoted is rarely the kind of impact that can be achieved by a solo act. Third, and relatedly, at L5 and beyond, mentoring and empowering others is part of the job description, so you can’t move forward unless you’re actively and demonstrably helping others to advance their careers.

Effective collaboration is extremely hard to pull off. On one hand, loads of people have big ideas about what the team could do or should do. Ideas are cheap as chips. Getting people onboard requires having ideas that are better than the alternatives on offer. Such ideas show foresight and an awareness of the evolving technological and sociotechnical landscape. Such ideas also give others an opportunity to share in the success of the idea. Promotion is rarely zero-sum. Many people can grow their careers by investing in the same big idea, and the thing to guard against is people exploiting your ideas for their own gain. You need to have visibility as the originator and driver of projects even if your project benefits the careers of many people.

On the other hand, tech companies are full of high-agency people who are often not very good at collaborating. Hence people management is required even if you are not directly involved in managing people. My advice here is that projects should have a small leadership team of one to three leaders who essentially have full control over the project and who work well together. Then it’s possible to grow the project like an onion. You could, for example, recruit ten to twenty core contributors, and beyond that recruit a network of twenty to fifty partial contributors. The onion method allows you to lead fairly large efforts which veritably involve large numbers of people, but nevertheless allows enough control to avoid descending into a disorderly pit of despair.

Because leading collaborations is hard, it’s best to start out co-leading collaborations under the mentorship of a person who is more experienced at leading collaborations. Study what kinds of problems come up and how those problems are dealt with. For example, a common problem is that core contributors fail to complete tasks on time or to the required specification. Watch how experienced leaders navigate such situations, repurposing resources from other parts of the project or modularising the task and imposing additional structure to help the person who is struggling. You can get a lot of credit for co-leading something even as the junior lead, but the main advantage is acquiring the skillset to become the principal lead of your own project.

One big question is whether it’s best to first get involved in a project as a core contributor before attempting to co-lead a project. Maybe. If you’re starting out in a tech company, being a core contributor to a project has several advantages. It’s an opportunity to demonstrate competence, build trust with others and show that you can take on more scope. It also gives you a sense of how projects operate in general, and to that extent, it’s probably good to start out as a core contributor on at least one project. But be warned: credit is power law distributed. I don’t think it’s an exaggeration to say that for large projects ~90% of the credit goes to ~10% of the people. So merely being involved in a project is unlikely to give serious leverage for a promo packet. The perception of core contributors is typically, ‘oh nice, you contributed to so-and-so’s project.’

One final point is that you should try to avoid getting involved in ineffective collaborations. The best way to do this is to involve yourself only in collaborations that are led by people who have a demonstrated track-record of delivering impact. It’s also possible for a collaboration to start out effective and then become ineffective; for example, due to poor management or bad execution on the part of core contributors. In cases like these it’s fine to pull out. There’s no point flogging a dead horse.

6.3. Sponsors

Getting promoted requires concrete evidence of outsized (i.e. next level) impact. You need to do big and impressive things beyond the scope of your current level that can be documented and measured so that it is hard for anyone to deny that you’re underlevelled. But to do big stuff you need sponsors to back projects and own any risks associated with them. And to be clear, all projects worth doing in terms of potential impact have substantial risks attached to them. The skill to master is selling the upside of projects to senior sponsors who can own the downside.

Starting out in a tech company as an L3 or L4, you will likely have a manager who is L5-L7. The manager might in turn report to a Director who in turn reports to a VP. Your manager and your manager’s manager are obviously the first sponsors to cultivate. But even at L3 and L4, I think it makes sense to cultivate sponsors all the way up to the VP level. The more advocates you have and the stronger their advocacy, the faster you can create promotion-conducive impact. The ideal person is someone who feels empowered to green-light cool stuff without permission.

There’s essentially a cyclical pattern that you need to adopt to grow the trust that people have in you and thus grow the degree of sponsorship that they are willing to give to you: (1) Propose something well-thought-out and cool that’s a bit beyond the assumed scope of your role. (2) Execute it brilliantly and in a way that exceeds people’s wildest expectations. (3) Rinse and repeat. Following these three steps will increase the amount of responsibility that people are happy to give you and will increase the degree to which they are willing to sponsor you in future. The visibility to higher-level potential sponsors (e.g. VPs) will follow naturally from this dynamic.

As you progress, you will need to make bigger asks from your sponsors. Hence it matters that you have a solid track record of following through on great ideas. Ultimately, you need to keep delivering value to the sponsors, and they will in turn give you more scope to create additional value for them. Promotion happens as a byproduct of this dynamic. Eventually something you do will be sufficiently big that it becomes undeniable that you are underlevelled and sponsors will want to create more space for you to do bigger and better things. Promoting you is how they give you this increased scope. Getting promoted isn’t about doing your job well, but expanding the job to the point where people can’t help but to give you a new one with greater responsibility.

Two further points. First, as your status grows, it becomes necessary to acquire sponsors from outside your reporting chain. This includes people within the company who can champion the work, people who have the power to block the work, and people who have the power to scale the work. Second, and relatedly, sponsorships need to be reinforced. Well-intentioned sponsors can be overruled or placed in situations where advocating for you is unacceptably costly, and that’s something that needs to be budgeted for. The broader your advocacy base, the better.

7. Mental Health

My qualitative observation is that Responsible AI folks are not doing well as a collective. There is a lot of anxiety about the future, which is quite disheartening. To make matters worse, being a Responsible AI person often feels antithetical to the move-fast-and-break-things ethos of Silicon Valley, which gives you the distinctive sense that you’re someone who’s in the way and not someone who’s contributing to the collective mission of the industry. Worse still, there’s a lack of structure in the field at present, which means that everyone is beating out their own path. The unstructured environment makes it hard to know whether you’re moving in an optimal direction or whether you’re making the problems that you’re trying to solve worse.

I think it’s healthy to admit that the rate of technological change is nuts. The risk landscape is overwhelmingly uncertain and the impact of well-intended actions is hard to assess. In these circumstances it’s completely normal to feel lost, confused, overwhelmed, powerless, angry and exhausted. It’s therefore important that we, as a field, connect with one another, support each other and, to the extent that we’re comfortable, share our vulnerabilities. It’s also important to understand that work is not everything. While career success requires periods of intense work and sacrifice, it’s not sustainable to plough full-steam-ahead for months or years at a time. That is a recipe for burnout. It’s not good for individuals and it’s not good for the field as a whole.

I’ve done a few Responsible AI careers panels alongside people who praise hustle culture. The basic premise of hustle culture is that individual agency explains the lion’s share of the variance in success. Hence what’s needed to be successful is to work relentlessly hard and in particular to outwork the competition. This picture is a drastic oversimplification. To be sure, working hard is an important ingredient in success. But the truth is that circumstantial factors play a massive role. Landing a bad PhD supervisor can set back a research career by years or kill it altogether. The right mentorship at the right time can build a star researcher and the absence of mentors can kill otherwise promising researchers. What school you went to can drastically impact the amount of confidence you have in your own abilities and the degree to which you are able to fully exploit the opportunities made available to you at university and in the workplace. Where you are born and how much money your parents have can radically shift the cost-benefit ratio of going to grad school. Hustle culture downplays factors like these while encouraging people to attribute the delta between where they are and where they want to be to a lack of hard work.

My own mental model is one on which everyone is climbing a different gradient. You can always climb higher by pushing harder. But where you end up is only partly explained by hard work, and so it’s not helpful to draw comparisons in outcomes across people. People also have different aims. While some aim to climb as high as possible up their career gradient, others do not, and having multiple priorities within a lifetime is nothing to be ashamed of. Furthermore, to the extent that climbing the career gradient is what matters, sustainability is more important than speed over the long run. Hence we should not valorise the grind.

The flipside to not overestimating the importance of agency in realising our career outcomes is that we should also be cautious not to underestimate how impactful agency can be. A little bit of self-belief goes a long way. Many people are afraid of looking stupid and so don’t do things. This is a bad strategy because in the end nothing gets achieved. Some of the best advice I can give is to have a crack at something even if you don’t feel qualified. In the worst-case scenario, you fail and people laugh at you. And so what? What are they doing? People who actually achieve things are not spending time laughing at other people’s failures. There’s no point basing your sense of self-worth on the opinions of losers.