In refactoring how data is retrieved from GitHub and stored, the current operation fetches everything from GitHub in one pass and then writes it all to the database. This is time consuming and, in some environments, could cause timeouts or memory-exhaustion errors. Even on my system, with a jacked-up timeout setting and memory allotment, fetching the 500+ open PRs and storing them to the database took close to two minutes with zero user feedback; the only way I could tell it was still running was to refresh the database table and watch rows being added. This should be refactored into discrete steps, somewhat like the Smart Search indexer.
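As a rough illustration of the stepped approach, here is a minimal sketch (in Python, not the extension's actual code) that pages through GitHub's pull requests endpoint and commits each page to a local store before requesting the next, printing progress as it goes. The repository URL, table schema, and progress output are all hypothetical placeholders.

```python
import sqlite3
import requests

API_URL = "https://api.github.com/repos/OWNER/REPO/pulls"  # hypothetical repo
PER_PAGE = 100  # GitHub's maximum page size for this endpoint


def fetch_and_store(db_path="pulls.db"):
    """Fetch open PRs one page at a time, storing each page before
    requesting the next, so memory use stays bounded and progress
    is visible after every batch."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pulls "
        "(id INTEGER PRIMARY KEY, number INTEGER, title TEXT)"
    )
    page = 1
    total = 0
    while True:
        resp = requests.get(
            API_URL,
            params={"state": "open", "per_page": PER_PAGE, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break  # no more pages
        conn.executemany(
            "INSERT OR REPLACE INTO pulls (id, number, title) VALUES (?, ?, ?)",
            [(pr["id"], pr["number"], pr["title"]) for pr in batch],
        )
        conn.commit()  # persist this batch before fetching the next
        total += len(batch)
        print(f"Stored page {page} ({total} PRs so far)")  # crude progress feedback
        page += 1
    conn.close()
    print(f"Done: {total} open PRs stored")


if __name__ == "__main__":
    fetch_and_store()
```

In the real refactor, each page could instead be processed as a separate request driven from the client, so the UI can update a progress indicator between batches rather than waiting on one long-running job.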
Still TODO: