User:Themumblingprophet/sandbox

Example Algorithm Formatting

 * Inputs: Given a network $G = (V,E)$ with flow capacity $c$, a source node $s$, and a sink node $t$
 * Output: Compute a flow $f$ from $s$ to $t$ of maximum value
 * $f(u,v) \leftarrow 0$ for all edges $(u,v)$
 * While there is a path $p$ from $s$ to $t$ in $G_f$, such that $c_f(u,v) > 0$ for all edges $(u,v) \in p$:
   * Find $c_f(p) = \min\{c_f(u,v) : (u,v) \in p\}$
   * For each edge $(u,v) \in p$:
     * $f(u,v) \leftarrow f(u,v) + c_f(p)$ (send flow along the path)
     * $f(v,u) \leftarrow f(v,u) - c_f(p)$ (the flow might be "returned" later)

From the Ford–Fulkerson algorithm article, found via Articles with example pseudocode.
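As a rough illustration of the pseudocode above, here is a minimal Python sketch. It uses breadth-first search to find augmenting paths (the Edmonds–Karp variant of Ford–Fulkerson); the `max_flow` name and the edge-dictionary representation are illustrative choices, not from the article.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS path search (Edmonds-Karp variant).

    capacity: dict mapping (u, v) edge tuples to capacities.
    Returns the value of a maximum flow from s to t.
    """
    # Residual capacities; reverse edges start at 0 so flow can be "returned".
    residual, adj = {}, {}
    for (u, v), c in capacity.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    flow = 0
    while True:
        # BFS for a path p from s to t with c_f(u,v) > 0 on every edge.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum

        # Bottleneck c_f(p) = min residual capacity along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)

        # Augment: send flow forward; reverse edges let it be returned later.
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# Example usage on a small network; the maximum flow here is 5.
capacity = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
            ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(capacity, 's', 't'))  # -> 5
```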

Here is a draft of the history section of the Long short-term memory page.

About Apple's QuickType: The sourced articles are all secondary sources, tech news blogs. Both of them (three, counting the Wired article) said that Apple would soon deploy LSTMs in QuickType; it had not yet done so. They were all reporting on a single talk by Craig Federighi, Apple's senior vice president of software engineering, at the 2016 Apple Worldwide Developers Conference. In part of the talk he said that Apple would use LSTMs to improve Siri and QuickType. Apple's blog post announcing the use of LSTMs in QuickType came out in September 2018, and the blog post about LSTMs in Siri came out in August 2017. The paper about the model they used was published earlier in 2017. Also, Apple uses the LSTM for language identification (2019).

History
1997: LSTM was proposed by Sepp Hochreiter and Jürgen Schmidhuber. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells and input and output gates.

1999: Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state.

2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) to the architecture. Additionally, the output activation function was omitted.
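Taken together, the entries above describe the cell, the input and output gates, the forget gate, and the peephole connections. As a rough illustration only (not from the draft's sources), here is a minimal NumPy sketch of one step of the resulting peephole LSTM; the weight names `W`, `U`, `P`, and `b` are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, P, b):
    """One step of a peephole LSTM cell (illustrative sketch).

    W[k] (hidden x input), U[k] (hidden x hidden), and b[k] (hidden,)
    are weights for k in {'i', 'f', 'o', 'g'}: the input, forget, and
    output gates and the cell candidate. P[k] (hidden,) are diagonal
    peephole weights from the cell state to each gate.
    """
    # Input and forget gates peek at the previous cell state (peepholes).
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + P['i'] * c_prev + b['i'])
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + P['f'] * c_prev + b['f'])
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])  # candidate update
    # The forget gate (1999) lets the cell reset its own state.
    c = f * c_prev + i * g
    # The output gate peeks at the freshly updated cell state.
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + P['o'] * c + b['o'])
    # With the output activation omitted (2000 variant), h = o * c
    # rather than h = o * tanh(c).
    h = o * c
    return h, c
```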

2009: An LSTM-based model won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest.

2013: LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.

2014: Kyunghyun Cho et al. put forward a simplified variant called the gated recurrent unit (GRU).

2015: Google started using an LSTM for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%.

2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.

Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType in the iPhone and for Siri.

Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for its text-to-speech technology.

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.

Researchers from Michigan State University, IBM Research, and Cornell University published a study at the Knowledge Discovery and Data Mining (KDD) conference. Their study describes a novel neural network that performs better on certain data sets than the widely used long short-term memory network.

Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".

2019: Researchers from the University of Waterloo proposed a related RNN architecture that represents continuous windows of time. It was derived using Legendre polynomials and outperforms the LSTM on some memory-related benchmarks.

An LSTM model climbed to third place in the Large Text Compression Benchmark.