It's an automated program. It's software. It's a bit of code.
If you haven't seen it used in enough contexts, or you aren't sensitive to subtle differences in meaning, you can auto-derive the wrong definition. Eventually that wrong definition may become the common one, if the mistake is common enough and the error spreads widely, as generalizing errors are prone to do ('mergers spread at the expense of distinctions' is an old linguistic slogan; thanks, Bartoli!).
The 'logical' process used for figuring out what a new word means stops as soon as the word fits well enough, even if it's wrong. That process is called "abductive reasoning." It's not logic, but it is damned useful for producing hypotheses. The problem comes when proof is sought for the hypothesis instead of disconfirmation; that's the scientific method flipped upside down and called 'truth seeking'. People should read C.S. Peirce more than they do, but he's too pragmatic for most.
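To make that concrete, here's a toy sketch (my own illustration, not anything from linguistics proper) of how that kind of first-fit abduction latches onto a wrong definition: the search stops at the first candidate that fits "well enough" and never looks for a better fit or a disconfirming context. All the names and data here are made up for the example.

```python
def fits(definition, contexts, threshold=0.5):
    """True if the candidate definition explains enough of the observed contexts."""
    hits = sum(1 for c in contexts if c in definition["explains"])
    return hits / len(contexts) >= threshold

def abduce(candidates, contexts):
    """Return the first 'good enough' hypothesis -- abduction, not proof."""
    for definition in candidates:
        if fits(definition, contexts):
            return definition["gloss"]   # stop here; no disconfirmation step
    return None

# Someone who has only ever seen 'bot' in arguments about fake accounts:
contexts = ["fake account", "spam reply", "astroturf swarm"]

candidates = [
    # Wrong, but adequate for these contexts -- so it wins.
    {"gloss": "a person paid to post propaganda",
     "explains": {"fake account", "spam reply", "astroturf swarm"}},
    # The actual definition, never even examined.
    {"gloss": "an automated program; software; a bit of code",
     "explains": {"fake account", "spam reply", "astroturf swarm",
                  "web crawler", "chat assistant"}},
]

print(abduce(candidates, contexts))  # prints the wrong gloss
```

The point of the sketch is the early `return`: once something fits, the loop never tests the hypothesis against contexts that would break it, which is exactly the flipped-upside-down version of the scientific method described above.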
The upshot: when others point out that there are bots on Twitter or Facebook, they are emphatically not talking about living, breathing people.
DU might have bots, though that's unlikely: it's not a big or important enough concern. Congress can't have them; Twitter easily does.
And I know you'll be sure to point out that it might be 'bot', as in 'botfly', recycled. But that's at best a snarky after-the-fact allusion; 'bot' is shortened from 'robot', the way 'droid' is short for 'android'.