User talk:Veritas Aeterna

Earlier, Introductory Questions
Hi, helpful friends. Here are my questions so far.

1. I want to show the book cover for The Pinochet File at

http://www.amazon.com/The-Pinochet-File-Declassified-Accountability/dp/B000FVQV48/ref=ntt_at_ep_dpt_1

I notice that a book cover has been used, e.g., for 'The Discoverers'. Can I do that here? If so, do I (1) get it from Amazon, or (2) take a photo of the book myself? What is the procedure? I can change high res to low res.

2. Any other suggestions for the article I added on the book 'The Pinochet File: A Declassified Dossier on Atrocity and Accountability'?

3. Am I supposed to put something on the talk page of an article every time I edit it, or only to address disputes?

Thanks.


 * Go to the Amazon page (here), click "See larger image" then right click the image and save it to your computer. Then go to Upload and follow the steps to upload the picture to Wikipedia (it is a fair use image).
 * The article looks OK at a glance. You could request more feedback from the Help desk.
 * You only need to use the talk page for controversial edits. Otherwise, just be bold.
 * --Commander Keane (talk) 04:55, 2 April 2012 (UTC)

Disambiguation link notification for April 4
Hi. When you recently edited The Pinochet File, you added links pointing to the disambiguation pages Department of Defense, Treasury Department and DINA (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. (Read the FAQ • Join us at the DPL WikiProject.)

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 15:26, 4 April 2012 (UTC)

Richard Nixon talk page notice
I have added a section on the talk page for the article Richard Nixon titled "Section deleted on 13 December 2012." Please share your thoughts on the talk page. Thanks. Mitchumch (talk) 17:28, 16 December 2012 (UTC)

MfD nomination of User:Veritas Aeterna/Draft Kissinger
User:Veritas Aeterna/Draft Kissinger, a page you substantially contributed to, has been nominated for deletion. Your opinions on the matter are welcome; please participate in the discussion by adding your comments at Wikipedia:Miscellany for deletion/User:Veritas Aeterna/Draft Kissinger and please be sure to sign your comments with four tildes ( ~~~~ ). You are free to edit the content of User:Veritas Aeterna/Draft Kissinger during the discussion but should not remove the miscellany for deletion template from the top of the page; such a removal will not end the deletion discussion. Thank you. Ricky81682 (talk) 19:26, 10 October 2015 (UTC)
 * It's OK to delete, thank you for asking. Veritas Aeterna (talk) 19:31, 11 October 2015 (UTC)

MfD nomination of User:Veritas Aeterna/Draft Nixon
User:Veritas Aeterna/Draft Nixon, a page you substantially contributed to, has been nominated for deletion. Your opinions on the matter are welcome; please participate in the discussion by adding your comments at Wikipedia:Miscellany for deletion/User:Veritas Aeterna/Draft Nixon and please be sure to sign your comments with four tildes ( ~~~~ ). You are free to edit the content of User:Veritas Aeterna/Draft Nixon during the discussion but should not remove the miscellany for deletion template from the top of the page; such a removal will not end the deletion discussion. Thank you. Ricky81682 (talk) 19:26, 10 October 2015 (UTC)
 * It's OK to delete, thank you for asking. Veritas Aeterna (talk) 19:31, 11 October 2015 (UTC)

Changes on the Symbolic AI Article
Hi, Charles, thanks for your suggestions. Daniel Kahneman's ideas are now quite widespread in the industry, and I've seen his view that the two approaches are complementary presented countless times. It provides a very useful way of looking at both approaches, where we don't have to say there is a single correct way to proceed with AI.

I lived through all this, working in the AI industry after grad school. I got my PhD in CS in the mid-80s. To say that time was the twilight of symbolic AI is only correct if you measure success by commercial funding and media coverage. Yes, AI receded from the media limelight and the LISP-based hardware companies went under. But work in symbolic AI research continued in universities, and to a lesser extent in industry, although often under other guises.

I think overall, the approach to AI history I'm advocating here is consistent with both Henry Kautz and Russell & Norvig. Both express the view that after the second AI winter there was a period where the field went back to addressing problems with handling uncertainty and then began incorporating Bayesian and more statistical approaches. However, there was no sudden burst of sub-symbolic research; instead, the work was more on Bayesian approaches to expert systems and new approaches to machine learning such as inductive logic programming, decision trees, symbolic machine learning, and probabilistic logic approaches such as statistical relational learning (e.g., Markov Logic Networks). I'm not saying there was no work in neural networks, just that it was not the primary focus of the field.

To imply that the field instead turned to sub-symbolic methods at the time implies that areas such as neural networks and deep learning became predominant then, which is not the case. Instead, the explosion of deep learning is widely dated to around 2012, when one of Hinton's deep-learning-based neural networks, AlexNet, roundly beat all competitors on an ImageNet benchmark. E.g., Russell and Norvig, in section 1.3.8, date it as 2011-present, and Kautz dates it as (2012 - ?).

Let me address some problems in the second paragraph as it reads now.

1. The first sentence is fine.

2. The second, "Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field." is overly broad. Perhaps if we said "ultimate goal" that would be more precise, since many other researchers, especially those in KBS (knowledge-based systems), were pursuing more limited models of emulating human skill, dialogue, and thinking. Obviously, expert systems were not intended to be all-intelligent, just performant in one area. And, yet others, like John Anderson, were building cognitive architectures simulating human performance.

3. The third, "However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach.", I would rewrite to: "However, in the late 80s and 90s, specific technical problems, such as brittleness, difficulties handling uncertainty, and problems with acquiring knowledge from subject matter experts and maintaining the large knowledge bases that resulted, showed the limits of the symbolic approach at that time."

So, basically, not saying symbolic AI is dead and buried, just that it had to pause and address problems.

4. The fourth sentence, "AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks.[6]", I think is incorrect. Symbolic AI was not abandoned for sub-symbolic AI. There was research in these areas before the second AI Winter, including genetic algorithms and neural networks. Danny Hillis's work on connectionism was different from most neural network work now. If I recall correctly, it focused on spreading activation and message passing, not on backpropagation. And those of us in AI didn't say we were doing sub-symbolic work.

Certainly, there was a massive shift from around 2012 on, and then it seemed as if symbolic AI had all but disappeared, and those in the deep learning camp presented it as if it were dead and buried and had never made any useful contributions. Also, as Kautz points out, "Overcoming the knowledge acquisition bottleneck led the field of AI to a renewed focus on machine learning. For most of the second winter, however, few researchers returned to the roots of machine learning in artificial neural networks.", which contradicts what the fourth sentence seems to say happened.

Thanks for mentioning the talks, I need to add them to the citations, they are really important and more accessible than the papers.

I went back to Rossi's talk. In the context of Bart Selman's talk, which occurred just a day or two earlier, which she refers to, and given the title, "Thinking Fast and Slow", it is clear that they believe this new trend has begun, not just that it might. See also her slide 10, and these spoken words, quoting Kautz: "...there is a violent agreement on the need to bring together neural and symbolic traditions...". Further context: she is at IBM, they are working on neuro-symbolic systems, and she presents an example of neurosymbolic research from her own work later on.

I'd also recommend Henry Kautz's talk and his coverage. For the future of AI, starting at 29:01, Kautz says, "We essentially have violent agreement on the need to bring together the neural and symbolic traditions.", but there is disagreement on how to do this. He proposes a taxonomy of six kinds of neuro-symbolic systems.

Going back to the Second AI Winter, (about 16:42) he cites the problem of expert system maintenance foremost. The collapse of AI workstations was more due to the availability of equivalent performance for LISP and Prolog on alternative, standard workstations. He also shows how the collapse was an impetus to other successful work: "I would argue that it's kind of the drive to model expert knowledge combined with the shortcomings of knowledge engineering that really led to knowledge induction or modern machine learning in expert domains: so, decision tree learning, inductive logic programming, and decision theoretic expert systems, and other such work." (about 16:22-16:42) There is no mention of subsymbolic systems such as deep learning until 2012.

Other Notes

A brief digression, Rossi also presents an alternative overview of AI history on slide 9 that might also work in the introductory part of the Symbolic AI article, although it is less detailed (just one slide) of course.

For Bart Selman's talk (start about 1:45:00 in!), you'll see he also dates the Deep Learning revolution at 2012. His main theme is a reunification of subfields such as vision, NLP, planning, etc., and that we can "use output from a perceptual system and leverage a broad range of existing AI techniques" (slide 95) that we could not before. The parts where he addresses combinations of symbolic and neural reasoning start at slide 114 (1:58:17), although he casts this more as combining knowledge-driven and data-driven approaches. He emphasizes that "scientific knowledge has an explanatory, causal component. It's cumulative" (about 2:01:00), unlike data. He says "Concept discovery is central to scientific discovery." (2:03:22). He also talks about systems that integrate reasoning and learning, but his focus is a bit more on the reunification of subfields.

Thanks, Charles, for discussing rather than just reverting my edits. Feel free to write on my user page. For now, I suggest we just talk. I'll just add the references I mention here.

I'd like to expand the section on neurosymbolic systems and bring in material from "Neurosymbolic AI: The 3rd Wave". The abstract alone: "Neural-symbolic computing has been an active area of research for many years seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. ... The insights provided by 20 years of neural-symbolic computing are shown to shed new light onto the increasingly prominent role of trust, safety, interpretability and accountability of AI." shows there is much more work in this area than most people know about.

Later, I'd also like to expand the section on symbolic machine learning, which seems to be largely overlooked now. Instead, you almost get the impression that no machine learning was done until neural networks, which is not true.

This week is fairly busy so I may not be able to get to either until later this week or early next week.

I wanted to put all the arguments against Symbolic AI under Controversies. I'm not sure I'd consider mathematical optimization or statistical classifiers as subsymbolic AI, but rather tools that can be used for either kind of AI. E.g., Dan Roth uses ILP (integer linear programming) for coreference resolution, and I've seen optimization used in abductive reasoning. As for statistical classifiers, decision trees are certainly symbolic, but I'd agree random forests are more arguable, being harder to interpret. And an SVM is even more cryptic.
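That interpretability distinction can be illustrated in miniature. Here is a toy sketch (pure Python, with a made-up weather dataset) of symbolic machine learning: a one-level decision stump whose learned model is itself a readable IF/THEN rule, which is exactly the property that ensembles like random forests and opaque models like SVMs give up:

```python
# Toy symbolic learner: pick the single attribute whose split best
# predicts the label, and emit the result as an explicit rule.
# The dataset and attribute names are invented for illustration.
from collections import Counter

# Toy examples: (outlook, windy) -> play?
DATA = [
    ("sunny", False, "yes"),
    ("sunny", True,  "no"),
    ("rain",  False, "yes"),
    ("rain",  True,  "no"),
]

def best_stump(data):
    """Return (attribute index, rule) for the most predictive attribute."""
    best = None
    for attr in (0, 1):
        # Group labels by this attribute's value
        branches = {}
        for row in data:
            branches.setdefault(row[attr], []).append(row[2])
        # Score = number of examples the majority label gets right
        correct = sum(Counter(labels).most_common(1)[0][1]
                      for labels in branches.values())
        rule = {value: Counter(labels).most_common(1)[0][0]
                for value, labels in branches.items()}
        if best is None or correct > best[0]:
            best = (correct, attr, rule)
    return best[1], best[2]

attr, rule = best_stump(DATA)
name = ["outlook", "windy"][attr]
# The learned model is a symbolic, inspectable rule set:
for value, label in rule.items():
    print(f"IF {name} == {value!r} THEN play = {label}")
```

On this data the learner picks "windy" and prints two IF/THEN rules; a human can audit the model directly, whereas a forest of hundreds of such stumps, or an SVM's weight vector, has no comparably direct reading.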

If you wanted to expand the arguments against symbolic AI there, from the standpoint of sub-symbolic AI, you could add text there.

00:00, 26 July 2022 (UTC)

More on Symbolic AI
Hi, Charles. Actually, I was across the Bay at your frenemy school, got my MSEE there while taking all the AI courses I could, including from Feigenbaum, McCarthy, and Winograd. Grad school and PhD in AI was UT-Austin, where I was in between the neats and the scruffies. I worked in the field from the early 1980s to the present, have one book published in the field, some book chapters and journal articles, a few patents, and altogether some 40+ refereed citations in AI conferences or workshops—including AAAI and IJCAI. I have worked on research contracts for DARPA, ONR, IARPA, ARI, along with many corporate research projects. I didn't have any of this on my user profile, but thought I should add some of it now, as it seems more relevant. From 1987-2005, I was in various AI groups including FMC's AI group, Stanford Knowledge Systems Laboratories, and for the last nine years of that time span at Teknowledge. I knew Tom Kehler from Texas Instruments' AI group. I'm still in AI. I like Counting Crows, too.

I think we should not portray all of AI as monolithic, where first there was only symbolic AI, then at some point everyone switched gears and now there is only subsymbolic AI. Instead, there have always been subgroups—multiple strands—with competing theories and overlapping histories. E.g., Minsky's early work was on neural nets, and backpropagation appears to have been invented multiple times in the 60s, then popularized by Hinton in 1986. So, even at the start and through the heights of Symbolic AI, it wasn't all one or the other.

We also need to distinguish between:
 * Media perception
 * Government and commercial funding
 * Research community perceptions

So, in both of the AI winters that Kautz mentions, symbolic AI and neural net research continued, but to lesser extents. And after deep learning exploded circa 2012, symbolic AI still continued. And, over the past twenty years there has also been a thread of researchers looking at neurosymbolic AI.

And to your point that people in the business world use the term "AI" as synonymous with "machine learning with neural networks", and that symbolic AI is invisible in the wider world: yes, I would agree that much of the business world treats AI as the same as deep learning, and symbolic AI is invisible to the wider public. But we also want to paint an accurate picture of the state of the field, including where leaders of the field see the research going.

I know Hinton is certainly biased against symbolic AI. I was at an AAAI conference where he was invited to speak. When asked how he would respond to those who viewed symbols as necessary for reasoning (or a similar question; I can't remember the exact phrasing), he said, bluntly, that they should "Just get over it." Gary Marcus has also pointed out that there is a significant bias in the deep learning community against the use of symbols or attempts to incorporate knowledge.

So, the misconceptions I want this article to address, by showing these are not the case, are:
 * 1) Symbolic AI died in the second AI winter.
 * 2) Symbolic AI only relies on rule-based methods.
 * 3) Symbolic AI made no contributions worth mentioning.
 * 4) Subsymbolic AI fixes the problems of symbolic AI, and has no problems itself.
 * 5) There are no examples of symbolic machine learning; machine learning was only invented later.
 * 6) There are no examples of hybrid neuro-symbolic systems.

Some examples of neuro-symbolic systems include:
 * AlphaGo
 * Rossi's work
 * Research by Garcez and Lamb

There is more. Marcus also points out that Google's search uses both its knowledge graph and a large language model, an example of a hybrid system, even though it is not considered an AI system. I can start writing the neurosymbolic section to address all this in a better way. I agree that it has not happened "at the level of these other approaches", but it is happening, there are good examples, and Kautz even has a taxonomy of the approaches so far.

After that, I plan to add a discussion and examples of symbolic machine learning for the period following the AI winter.

Basically, I was just about half-way done with the article when we started talking. So I hadn't started the section on the First AI Winter; I think we can address the concern about intractability there. Also, I haven't even started the section on techniques.

For now, I added the section below, using Kautz's language, see if it addresses your needs.

The first AI winter was a shock: "During the first AI summer, many people thought that machine intelligence could be achieved in just a few years. The Defense Advanced Research Projects Agency (DARPA) launched programs to support AI research with the goal of using AI to solve problems of national security; in particular, to automate the translation of Russian to English for intelligence operations and to create autonomous tanks for the battlefield. Researchers had begun to realize that achieving AI was going to be much harder than was supposed a decade earlier, but a combination of hubris and disingenuousness led many university and think-tank researchers to accept funding with promises of deliverables that they should have known they could not fulfill. By the mid-1960s neither useful natural language translation systems nor autonomous tanks had been created, and a dramatic backlash set in. New DARPA leadership canceled existing AI funding programs.

...

Outside of the United States, the most fertile ground for AI research was the United Kingdom. The AI winter in the United Kingdom was spurred on not so much by disappointed military leaders as by rival academics who viewed AI researchers as charlatans and a drain on research funding. A professor of applied mathematics, Sir James Lighthill, was commissioned by Parliament to evaluate the state of AI research in the nation. The report stated that all of the problems being worked on in AI would be better handled by researchers from other disciplines — such as applied mathematics. The report also claimed that AI successes on toy problems could never scale to real-world applications due to combinatorial explosion."

Disambiguation link notification for September 4
Hi. Thank you for your recent edits. An automated process has detected that when you recently edited Symbolic artificial intelligence, you added a link pointing to the disambiguation page BB1. Such links are usually incorrect, since a disambiguation page is merely a list of unrelated topics with similar titles. (Read the FAQ • Join us at the DPL WikiProject.)

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 09:19, 4 September 2022 (UTC)

Edits without talk page discussion
I saw your edit summary at this edit. You're wrong about that. You were bold when you made that content. The administrator deleted much of it, with clear edit summaries. Now, per WP:BRD, it's your turn to use the talk page to justify why that content should be kept. The ideal result will be a revision of the content so it complies with our policies and guidelines and a compromise, collaborative, version is installed. I am not surprised this happened. Creating articles is not easy, and they are rarely accepted without any changes. In fact, the first few articles created by new editors are usually deleted completely. -- Valjean (talk) ( PING me ) 02:52, 21 September 2023 (UTC)


 * Hi, Valjean,
 * Thanks for your comment and suggestions. Perhaps I am confused about protocols and such. I thought it was up to the person making the changes to discuss and justify them. Nor did I see where he was an admin, or understand how that matters--I understand they have special powers, but if this is a special admin who reviews new pages, so that his edits are given more weight, I did not know such a procedure existed--I've never had an admin edit any of my pages before, so this is a first! Most of the comments were very short (e.g., (cut) ).
 * I had created new pages, e.g., on neuro-symbolic AI, but I never knew new pages were reviewed automatically or the reviewer had admin powers and thus did not need to justify their edits on the Talk Page, especially when they were major edits.
 * Anyway, I appreciate your explanation, Wikipedia is certainly a lot more complicated than it appears...
 * Veritas Aeterna (talk) 03:41, 21 September 2023 (UTC)
 * I only mentioned that editor was an admin as an FYI, nothing more. Nor is there an automatic review of new articles. -- Valjean (talk) ( PING me ) 04:12, 21 September 2023 (UTC)

re: your email
Hi there,

Thanks for your email. While I normally would never want to disclose any details about private communications onwiki, it appears that something about your inbox setup has prevented my reply from reaching you. If you'd like, I can copy and paste my response here. Thanks :) theleekycauldron (talk • she/her) 07:48, 30 September 2023 (UTC)


 * Hi, leek,
 * Thanks, but there is no need; I got your email OK. The problem is a spam blocker for new users, but I got what you sent and have added you to my contact list, so there is no need to copy it here. Thanks for all the good advice! I've already implemented much of it.
 * Veritas Aeterna (talk) 19:14, 1 October 2023 (UTC)

I have sent you a note about a page you started
Hello, Veritas Aeterna. Thank you for your work on Animal Farm Foundation. User:Scope creep, while examining this page as a part of our page curation process, had the following comments:

To reply, leave a comment here and begin it with. Please remember to sign your reply with ~. (Message delivered via the Page Curation tool, on behalf of the reviewer.)

 scope_creep Talk  09:30, 26 November 2023 (UTC)

Hi, thanks for your review and recommendations. I think the problem is that the section "Further Reading" should have been called "References", as it contains the source documents for the citations. I have changed that. If you still see any problems, please let me know! Thanks! Veritas Aeterna (talk) 21:22, 28 November 2023 (UTC)
 * You still need a further reading section, as some of these references need to be in it. For example, "Case judgement that allows Council Bluffs, Iowa Pit Bull Ban to Stand" by Danker is not referenced at all from the article, and neither is the whole "BSL studies" section. There is a script that shows harv errors that would be useful to you. Secondly, the formatting of putting the article name next to the citations is completely non-standard and will need to be removed. I've put the citations underneath a References section and renamed the references section into a Bibliography section, which is a more accurate name, but your annotated bibliography section now has the wrong heading. I would increase the section size of these. When you put the unused citations into the further reading section, make sure you add ref=none to them so they are not ref'd.    scope_creep Talk  22:04, 28 November 2023 (UTC)
 * Hi, sorry about that, I think we were actually editing at the very same time. I had removed some parts of the La Presse article that were not referenced. You are right about the BSL references and I can put them into Further Reading.
 * The script that shows the harv errors would be very useful. I have tried adding the following to my CSS page:
 * .mw-parser-output .cs1-maint {display: inline;} /* display Citation Style 1 maintenance messages */
 * .mw-parser-output .cs1-hidden-error {display: inline;} /* display hidden Citation Style 1 error messages */
 * but I may have the CSS wrong, anyway it is not showing the messages you must be seeing. How do I do that? What is the script you have and where do you put it?
 * Thanks. Let me do some more fixes and give it to you then for another round...I'll let you know when I am done. Veritas Aeterna (talk) 23:07, 28 November 2023 (UTC)
 * I would remove press-releases. They are junk and need to go.   scope_creep Talk  22:07, 28 November 2023 (UTC)
 * Agreed that they don't support objective facts. Here I was using them to show AFF's advocacy through PR, but I can remove them as the other paragraphs make stronger arguments than these two PR releases. Veritas Aeterna (talk) 23:13, 28 November 2023 (UTC)
 * I've removed some of these article name which are non standard. Can you do the rest.   scope_creep Talk  22:11, 28 November 2023 (UTC)
 * Yes, let me have another go, and I will tell you when it's ready for review--either tonight or tomorrow. Tnx. Veritas Aeterna (talk) 23:15, 28 November 2023 (UTC)
 * Ah...I think I found the script code you are referring to, after a Google search, and added it to my common.js :
 * importScript('User:Trappist the monk/HarvErrors.js'); // Backlink: User:Trappist the monk/HarvErrors.js 
 * Is that what you are using? It is quite useful. Thanks for letting me know about it.
 * See what you think now. Hopefully, I've gotten all the Harv errors now, at least all I could see with that script. Veritas Aeterna (talk) 00:53, 29 November 2023 (UTC)

ArbCom 2023 Elections voter message
Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC). All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add to your user talk page. MediaWiki message delivery (talk) 00:28, 28 November 2023 (UTC)