Dear Shaun,

Thank you for your submission to PeerJ Computer Science. I am writing to inform you that your manuscript "LSTM neural network for textual ngrams" has been rejected for publication.

EDITOR COMMENTS (Fabrizio Sebastiani)
====================================================

I am the PeerJ Computer Science associate editor in charge of this paper. I have decided to reject this paper straightaway, without subjecting it to the usual peer review process, because it is very far from providing the minimal contribution that makes a paper worthy of full peer review. It falls squarely into the "waste of reviewers' and editors' time" category; it is not clear whether the author has a clear idea of what a scientific paper should consist of.

To tell the truth, I am even uncertain what the paper is about; it is a fairly rambling short note (6 pages) about some experiments the author did. Tentatively (though I might be wrong: the paper is not clear about this), I would say that it is about using deep learning (specifically, LSTM neural networks) to predict the n-th word in a sequence, given the previous (n-1) words as input.

The contribution that this paper makes to the related literature is practically null. As a whole, the paper lacks (a) a discussion of the task that it wants to attack and why this task is important, (b) a discussion of other methods that have been proposed in the past to solve this task, (c) a discussion of why the proposed method is better than them, (d) a discussion of whether the proposed method is novel or not, and (e) a presentation of the results obtained in the experiments. The paper does not even discuss what its contribution is meant to be.

In terms of technical soundness, the paper is bad. It contains several wrong statements, e.g., "Deep learning is a class of machine learning problems" (no, it is a class of techniques, not problems); "Ngrams are probabilistic models" (no, they are sequences of tokens, not models); "Ngrams are essential in creating language prediction models" (no, many language prediction models do not make use of ngrams); etc.

The quality of presentation is very bad. First of all, as I have said above, the author does not even make clear what the paper is about (the abstract does not clarify it, and neither does the introduction). The abstract is a sequence of short sentences where the connection between one sentence and the previous one (e.g., "They are designed to model the responses of neurons in the human brain. Learning can be supervised or unsupervised.") is either absent or not apparent. The paper contains essentially no introduction: from line 2 it starts introducing mathematical notation, and at line 7 we find the first equation; there is absolutely no discussion of what the paper is about, what task it is trying to solve, why that task is important, what approach is being followed, why that approach is promising, etc.

As a consequence, the paper falls well below the threshold for acceptance in this journal (and, I should say, in any scientific venue).

Editorial decision: REJECT

With kind regards,

Fabrizio Sebastiani
Academic Editor, PeerJ Computer Science

# ARTICLE ID: 32804

------------------------------------
Need help? Just reply to this email.
------------------------------------

Follow PeerJ on:
- Twitter: https://twitter.com/thePeerJ
- Facebook: https://www.facebook.com/thePeerJ
- Google+: https://plus.google.com/+PeerJ
- Blog: https://peerj.com/blog/

© 2018, PeerJ, Inc.
PO Box 910224, San Diego, CA 92191, USA