Automated writing evaluation (AWE) is a popular form of educational technology designed to supplement writing instruction and feedback, yet research on the effectiveness of AWE has yielded mixed findings. The current study considered how students' perceptions of automated essay scoring and feedback influenced their writing performance, revising behaviors, and future intentions toward the technology. The manner in which the software was presented (i.e., claims about the accuracy and quality of the automated scoring and feedback) was modestly related to students' expectations and perceptions. However, students' direct experiences with the software were most strongly associated with their perceptions. Importantly, students' perceptions appeared to have minimal impact on their "in the moment" use of the software to write and revise successfully: students revised and improved their essays regardless of their positive or negative views of the system. However, positive and negative perceptions significantly predicted future intentions to use the software again or to recommend it to a friend. Implications for AWE design, implementation, and evaluation are discussed.