Leveraging Differences in Entropy for Password Generating Models
- Resource Type
- Conference
- Authors
- Williamson, Michael; Wang, Taehyung
- Source
- 2022 Fourth International Conference on Transdisciplinary AI (TransAI), pp. 59-62, Sep. 2022
- Subject
- Computing and Processing; Recurrent neural networks; Passwords; Markov processes; Logic gates; Entropy; Artificial intelligence; Password Entropy; Markov Chain; Recurrent Neural Network; Gated Recurrent Unit; Password Generation
- Language
- Abstract
- In this paper, we explore the efficacy of password-cracking models trained on passwords from a dataset of lower entropy than that of the target set. We compare a Markov model against a recurrent neural network model on the percentage of the target set recovered by a generated set. The generated set is produced from a dataset of lower-entropy passwords, under the expectation that the poor habits repeated in weaker passwords can be learned and carried forward into the generated passwords. The idea behind this approach is that cracking higher-entropy passwords that share the bad habits of low-entropy passwords could be faster than with traditional password-cracking methods.
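The setup described in the abstract can be sketched in miniature: estimate the per-character entropy of a training set, fit a simple generative model to it, and score generated guesses by the fraction of the target set they recover. This is a hedged illustration only; the paper's actual models (the RNN/GRU in particular) and datasets are not reproduced here, and a first-order character Markov chain stands in as the simplest of the two compared approaches. All function names below are hypothetical.

```python
import math
import random
from collections import Counter, defaultdict

def shannon_entropy(passwords):
    """Per-character Shannon entropy (bits) over a list of passwords,
    one crude way to compare the 'strength' of two datasets."""
    counts = Counter(ch for pw in passwords for ch in pw)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def train_markov(passwords):
    """First-order character Markov model; '^' and '$' mark start/end."""
    transitions = defaultdict(Counter)
    for pw in passwords:
        chars = ["^"] + list(pw) + ["$"]
        for a, b in zip(chars, chars[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions, rng, max_len=16):
    """Sample one password from the fitted transition counts."""
    out, state = [], "^"
    while len(out) < max_len:
        choices = transitions[state]
        state = rng.choices(list(choices), weights=choices.values())[0]
        if state == "$":
            break
        out.append(state)
    return "".join(out)

def hit_rate(generated, target):
    """Fraction of the target set recovered by the generated set --
    the comparison metric the abstract describes."""
    return len(set(generated) & set(target)) / len(set(target))
```

In the paper's framing, `train_markov` would be fit on the lower-entropy dataset, a large generated set would be sampled with `generate`, and `hit_rate` would be measured against the higher-entropy target set; the RNN variant would replace the transition table with a learned sequence model trained on the same data.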