Template talk:Existential risk from artificial intelligence

Removed group "People"

This edit by User:Silence removed the group "People" from the template with the rationale that it is "more subjective". I'm not sure it was a good idea to remove it, so let's please discuss whether or not to keep it here.


So, for instance, what about finding some inclusion criteria that make the list less subjective? E.g., nobody may be added merely for being interested in the topic or having published some texts on it; only people who have been called a "researcher" (or the like) in that domain by a reputable source, or who have written a book or at least a noteworthy academic paper on the topic, would qualify... --Fixuture (talk) 22:53, 2 March 2016 (UTC)

  • My thinking is that 'existential risk' is such a huge topic, covering so many disciplines and such a wide range of levels of notability, that it's going to be a huge hassle to decide which people to include: we could come up with dozens of people to put on such a list just by looking at those worried about nuclear armageddon over the 20th century. If they're noteworthy enough, they should be featured prominently in at least one of the 'organization' or 'concept' articles, so I don't think leaving them out of the infobox will severely limit readers' ability to find articles about important individuals in this space. -Silence (talk) 02:58, 3 March 2016 (UTC)
@Silence: That's a good point. However, one could raise the bar of the inclusion criteria, such as by requiring people to have been called an expert or researcher in that area by a reputable source, or to have published an academic text on it (merely being worried about nuclear Armageddon wouldn't be sufficient). It might also be a misconception that there are many people with Wikipedia articles who research existential risks, even if one would think so given the topic's importance. I won't re-add the list... but maybe at some point somebody else will come up with a good way to handle this. It seems like you're treating this template as if it were {{Existential risk}} and not {{Existential risk from artificial intelligence}}: there aren't that many people researching existential risks from artificial intelligence. I don't think it'll be too many people if the inclusion criterion is set to being active in that area / researching it, rather than just being interested in or concerned about it (such as Bill Gates). Actually, the list as it was probably didn't miss more than maybe three or so people. Not including them doesn't remove people's ability to find articles about important individuals in this space, but it does make it harder and less intuitive. As of right now, I'll probably re-add it. --Fixuture (talk) 12:53, 25 March 2016 (UTC)
Sorry, the comment above was rushed and I forgot which template I was commenting on. :) My earlier edits assumed this was AI-specific, though. I think the best way to decide which groups to include is to defer to reputable sources like GiveWell, which lists the other orgs but not OpenAI or GCRI.
My suggestion would be to remove 'other' because it's unhelpfully general -- if a large portion of your categorization scheme is miscellanea, it suggests the categorization scheme is lacking. 'Concepts' is already a super general category!
Subsections ("Controversies and dangers of artificial general intelligence", "Artificial intelligence as a global catastrophic risk") shouldn't be linked either. 'Texts' would be a fine category, except I don't think there's a large enough literature to justify it yet -- the current infobox only links to two books plus one open letter. -Silence (talk) 22:07, 25 March 2016 (UTC)
Well, then GiveWell is missing some entries; other sources also list these. There aren't that many organizations concerned with this issue yet, so we don't have to be over-restrictive.
I don't think a template group being too general is a reason to remove it, as long as it doesn't completely clutter the template or anything like that; instead, its entries should be moved to new, more appropriate groups. Also note that many templates have an "other" or "related" group. Anyway, the reason these entries aren't grouped is probably that relatively few articles are left in "other"; if one grouped them, the template would grow considerably in height and have groups with just two entries or so (e.g. "Works" or "Texts").
Imo the two subsections should definitely stay linked, as they're currently the main Wikipedia entries on the template's topic (there could be a group "Main" for them, though two entries are probably too few for that).
I just re-added the "People" group, as it's highly relevant and useful here.
--Fixuture (talk) 18:46, 19 April 2016 (UTC)