Mission Challenges

I’m not sure if anyone else feels frustrated by how the mission challenges expect your answer to contain the exact text/sentence from the solution before it is validated. For instance, below is the output from a challenge I attempted. My code produced the expected outcome; however, the sentence I used differs from the solution’s, so the system wouldn’t pass me. My assumption is that learners who understand the underlying concepts can produce the same output as the solution transcript irrespective of how their sentences are constructed.
I am suggesting that the technical content developers at DQ look into this and mark answers based on the technical output. To me, this would help learners avoid plagiarising the solutions and focus on actually solving the challenges.

– actual + expected

– The China’s population is 1,379.30 million
– The India’s population is 1,281.94 million
– The USA’s population is 326.63 million
– The Indonesia’s population is 260.58 million
– The Brazil’s population is 207.35 million

+ The population of China is 1,379.30 million
+ The population of India is 1,281.94 million
+ The population of USA is 326.63 million
+ The population of Indonesia is 260.58 million
+ The population of Brazil is 207.35 million

I can understand some of your frustration with this and why it seems like it shouldn’t matter whether your sentences match exactly. However, the first five sentences displayed (“The China’s population”, “The India’s population”, etc.) are improper English. Don’t get me wrong: I understand that many learners weren’t born in countries where English is the main language. But for those seeking work in the U.S. from other countries, having stronger English than someone else applying for the same job is a real advantage. Lastly, at the end of the day, it is very easy to change the sentence structure of your output to match the required answer, as the sketch below shows.
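
For what it’s worth, here is a minimal sketch of that fix in Python, assuming the data is a list of (country, population) pairs like the ones quoted above; the variable names are hypothetical, not taken from the mission:

```python
# Hypothetical data matching the figures quoted above.
populations = [
    ("China", 1379.30),
    ("India", 1281.94),
    ("USA", 326.63),
    ("Indonesia", 260.58),
    ("Brazil", 207.35),
]

for country, millions in populations:
    # Rejected phrasing: "The China's population is 1,379.30 million".
    # The accepted phrasing only reorders the sentence; {:,.2f} adds the
    # thousands separator and two decimal places the checker expects.
    print("The population of {} is {:,.2f} million".format(country, millions))
```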