New & upvoted


Posts tagged community

Quick takes

William_S
17h
I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of four people, which worked on trying to understand language model features in context, leading to the release of an open-source "transformer debugger" tool. I resigned from OpenAI on February 15, 2024.
Not sure how to post these two thoughts, so I might as well combine them.

In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partly due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons: a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.

However, I also think many lessons from SBF's personal statements, e.g. his interview on 80k, are still as valid as ever. Just off the top of my head:

* Startup-to-give as a high-EV career path. Entrepreneurship is why we have OP and SFF! Perhaps also the importance of keeping as much equity as possible, although in the process one should not lie to investors or employees more than is standard.
* Ambition and working really hard as success multipliers in entrepreneurship.
* A career decision algorithm that includes doing a BOTEC and rejecting options that are 10x worse than others.
* It is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating. [1] (But fraud is still bad, of course.)

Just because SBF stole billions of dollars does not mean he has fewer virtuous personality traits than the average person. He hits at least as many multipliers as the average reader of this forum. But importantly, maximization is perilous; some particular qualities like integrity and good decision-making are absolutely essential, and if you lack them your impact could be multiplied by minus 20.

[1] The unregulated nature of crypto may have allowed the FTX fraud, but things like the zero-sum, zero-NPV nature of many cryptoassets, or its negative climate impacts, seem unrelated. Many industries are about this bad for the world, like HFT or some kinds of social media. I do not think people who criticized FTX on these grounds score many points. However, perhaps it was (weak) evidence that FTX was willing to do harm in general for a perceived greater good, which is maybe plausible, especially if Ben Delo also did market manipulation or otherwise acted immorally. Also note that in the interview, SBF didn't claim his donations offset a negative direct impact; he said the impact was likely positive, which seems dubious.
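To make the deterrence point at the top of that quick take concrete, here is a minimal sketch with entirely invented numbers (the success probability, the upside, and the dollar value of a prison year are all assumptions, not figures from the quick take):

```python
# Toy expected-value calculation for a risk-neutral actor weighing fraud.
# All numbers are invented purely for illustration.

p_success = 0.05                    # assumed chance the scheme "works"
upside_dollars = 1e12               # ex ante upside: becoming a trillionaire
prison_years_if_caught = 25         # sentence if caught
dollar_cost_per_prison_year = 1e7   # assumed willingness to pay to avoid one year in prison

expected_value = (
    p_success * upside_dollars
    - (1 - p_success) * prison_years_if_caught * dollar_cost_per_prison_year
)
print(f"Expected value of the scheme: ${expected_value:,.0f}")
# ~ $49.8 billion with these numbers: the 25-year sentence barely registers,
# which is the sense in which a risk-neutral actor "will not mind" it.
```

Under these assumptions the sentence would have to run to thousands of years (or the actor would have to be far from risk-neutral) before the expected value turned negative.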
tlevin
4d
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.

I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions. Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who strongly lean on signals of credibility and consensus when quickly evaluating policy options, than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences.

Would be interested in empirical evidence on this question (ideally actual studies from psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference class tennis type issues).
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
Excerpt from the most recent update from the ALERT team:

"Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%)."

Their estimated 10-year risk is a lot higher than I would have anticipated.
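As a quick sanity check on those numbers (my own back-of-the-envelope, not part of the ALERT update), here is how the quoted cumulative probabilities translate into implied annual probabilities under a simplifying constant-hazard assumption:

```python
# Convert ALERT's cumulative PHEIC probabilities into implied annual probabilities,
# assuming (simplistically) the same hazard in every year of each horizon.

cumulative = {1: 0.009, 5: 0.13, 10: 0.25}  # horizon in years -> quoted probability

for years, p_cum in cumulative.items():
    p_annual = 1 - (1 - p_cum) ** (1 / years)
    print(f"{years:>2}-year estimate of {p_cum:.1%} implies ~{p_annual:.1%} per year")

# ~0.9%/yr, ~2.7%/yr, and ~2.8%/yr respectively: the longer horizons imply a
# substantially higher annual risk than the 1-year figure, consistent with the
# team's view that the risk rises over the decade.
```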


Recent discussion

(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK? 

I want to try and gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could...


For my org, I can imagine using this if it was 2x the size or more, but I can't really think of events I'd run that would be worth the effort to organise for 15 people.

(Maybe like a 30% chance I'd use it within 2 years if it had 30+ bedrooms, and less than a 10% chance at the actual size.)

Cool idea though!

I'm confused. Don't you already have a second building? Is that dedicated towards events or towards more guests?

^I'm going to be lazy and tag a few people: @Joey @KarolinaSarek @Ryan Kidd @Leilani Bellamy @Habryka @IrenaK Not expecting a response, but if you are interested, feel free to comment or DM.


I'm posting this now because it's a pity it was never uploaded, even though this video gave me a lot of motivation for effective altruism.

Mjreard commented on My Lament to EA 14m ago

 I am dealing with repetitive strain injury and don’t foresee being able to really respond to any comments (I’m surprised with myself that I wrote all of this without twitching forearms lol!)

I’m a little hesitant to post this, but I thought I should be vulnerable. ...


I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I’m glad you got there and that you feel (relatively) comfortable about it. I’m sorry that the process of getting there was so trying. It shouldn't be.

It sounds like the tryingness came from a social expectation to identify as capital ‘E’ capital ‘A’ upon finding resonance with the basic ideas and that identifying that way implied an obligation to support and defend every other EA person and project.

I wish EA weren’t a ... (read more)

titotal
4h
Thank you for writing this post; I know how stressful writing something like this can be, and I hope you give yourself a break! I especially agree with your points about the lack of empathy. Empathy is the ability to understand and care about how other people are hurt or upset by your actions, no matter their background. This is an important part of moral reasoning, and is completely compatible with logical reasoning. One should not casually ignore harms in favour of utilitarian pursuits; that's how we got SBF (and, like, Stalinism). And if you do understand the harms, and realize that you have to do the action anyway, you should at least show the harmed parties that you understand why they are upset.

The OP was willing to write up their experience and explain why they left, but I wonder how many more people are leaving, fed up, in silence, not wanting to risk any backlash? How many only went to a few meetings but never went further because they sensed a toxic atmosphere? The costs of this kind of atmosphere are often hidden from view.
Charlotte Darnell
14h
Thanks for taking the time to write this and be vulnerable despite your concerns (and the RSI!). I definitely resonate with some of what you’ve written, and share some of your frustrations. I might expand my thoughts here or via DM in future if you'd be interested, but in the meantime, I just wanted to say that I’m really sorry it’s been a tough time.

I am glad to hear that you’ve had some good experiences along with the difficult ones (though this mixture of appreciation for and frustration with the EA community can be quite the emotional rollercoaster). I’m also glad you’re doing what feels right for you. Thank you for all the work and effort you’ve put in. I (and I know others too) have really enjoyed learning about your projects and I’m excited to see what you work on going forward, even if it's from a bit more of a distance.

It’s been lovely getting to know you a little and my metaphorical door is open if you’d like to chat sometime in future.

[If there are concerns you’d like to talk to the Community Health Team about, you can contact us here.]

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...


What role do you think journalism can play in advancing the cause of farmed animals? Can you think of any promising topics journalists may want to prioritize in the European context in particular, i.e. topics that have the potential to unlock important gains for farmed animals if seriously investigated and publicized?

ixex
12h
The year is 2100 and factory farming either does not exist anymore or is extremely rare. What happened?

About a week ago, Spencer Greenberg and I were debating what proportion of Effective Altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions.

Before I share my commentary...


Hey mate! Would you be keen to discuss this over a Zoom chat?

nathanhb commented on Why I'm doing PauseAI 7h ago

GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it’s hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict...


Cross-posting from LessWrong:

I absolutely sympathize, and I agree that, given the worldview and information you have, advocating for a pause makes sense. I would get behind 'regulate AI' or 'regulate AGI', certainly. I think, though, that pausing is an incorrect strategy which would do more harm than good, so despite sharing your concern about AGI dangers, I don't endorse that strategy.

Some part of me thinks this oughtn't matter, since there's a ~0% chance of the movement achieving that literal goal. The point is to build an... (read more)

yanni kyriacos
8h
Thanks for your comment, Rudolf! I predict that my comment is going to be extremely downvoted, but I'm writing it partly because I think it is true and partly because it points to a meta issue in EA: I think it is unrealistic to ask people to internalise the level of ambiguity you're proposing. This is how EAs turn themselves into mental pretzels of inaction.
Joseph Miller
16h
Yup. It's one of the main points of my post. If you support PauseAI today, you may unleash a force which you cannot control tomorrow.

I was interviewed in yesterday’s 80,000 Hours podcast: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. As I say in the podcast, there’s good evidence that this is a cost-effective way to save lives. Many peer-reviewed articles show...

deanspears
10h
Thanks so much for asking!

1. My sense is that expanding the program at this site (or keeping it alive and well for more years at this site) has increasing returns, because we spread the administrative costs over more babies. In fact, knowing we have the funding runway to keep the program healthy lets us hire higher-quality staff with multi-year commitments. But expanding to another district would have huge fixed costs, even if the marginal cost is identical once it is up and running. We would still have a lot of our learning-by-doing, and we would have the paperwork, software, and protocols that we have developed, but we would fundamentally need some new relationships and a new entrepreneurial leader to captain that ship. We don’t currently have that person, but there is no in-principle reason that they could not be found and hired someday. More broadly, running this program has caused us to realize that cost-effectiveness in EA, philanthropy, and development economics has not paid enough attention to what microeconomic theory knows about fixed, variable, marginal, and average costs all being different. It's on the to-do list to write about that someday.

2. There are three margins of behavior: the families, the government-salary doctors, and the nurses who are privately hired by the program. The families would counterfactually be bringing most babies home to poor odds. We believe the doctors are working harder and doing more, because the returns to their efforts are improved by the collaboration with the nurses and families. So what about the nurses? Government nurse jobs, for now, remain very hard to get (it's a bad equilibrium where there are both not enough nurses and not enough public facilities to hire them). So these nurses would likely work somewhere in the private sector. Who knows the tiny general equilibrium effect on statewide nurse wages (!), but the quality of healthcare and neonatal survival for babies born in private facilities is worse, on average...
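A minimal numerical sketch of that fixed-versus-marginal-cost point (all figures are invented for illustration; nothing here is from the program's actual budget):

```python
# Toy illustration of why fixed, marginal, and average costs diverge at a site.
# All numbers are invented; they are not the program's actual costs.

fixed_cost_per_site = 200_000     # assumed annual fixed cost: admin, software, protocols
variable_cost_per_baby = 50       # assumed marginal cost of serving one more baby

def average_cost_per_baby(n_babies: int) -> float:
    """Average cost per baby at one site serving n_babies per year."""
    return (fixed_cost_per_site + variable_cost_per_baby * n_babies) / n_babies

for n in (500, 2_000, 10_000):
    print(f"{n:>6} babies/year: average ${average_cost_per_baby(n):,.0f}, marginal ${variable_cost_per_baby}")

# 500 babies -> $450 average; 10,000 babies -> $70 average; marginal stays $50 throughout.
# Expanding at an existing site is judged against the ~$50 marginal cost, while opening a
# new district re-incurs the fixed cost, which is the asymmetry described in the comment above.
```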

Thanks so much for your response, that all makes sense!

You're understanding question 3 correctly - GiveWell's moral weights look like the following, which is fairly different from valuing every year of life equally. 

What is Manifest?

Manifest is a festival for predictions, markets, and mechanisms. It’ll feature serious talks, attendee-run workshops, fun side events, and so many of our favorite people!

Tickets: manifest.is/#tickets

WHEN: June 7-9, 2024, with LessOnline starting May 31 and Summer Camp starting June 3

WHERE: Lighthaven, Berkeley, CA

WHO: Hundreds of folks, interested in forecasting, rationality, EA, economics, journalism, tech and more. If you’re reading this, you’re invited! Special guests include Nate Silver, Scott Alexander, Robin Hanson, Dwarkesh Patel, Cate Hall, and many more.

You can find more details about what we’re planning in our announcement post.

Ticket Deadline

We’re planning to increase standard tickets from $399 to $499 on Tuesday, May 13th, so get your tickets ASAP!

Prices for tickets to LessOnline (May 31 - June 2) and Summer Camp (June 3 - June 7) will also...
