Friday, March 22, 2013

IPS - Day 39

Today was the mid-term exam. I wrote about the general structure and philosophy of my mid-term exams in yesterday's post about discrete math.

The categories I created for this exam were:
  • Conditional probability and independence
  • Expected value
  • Probability
  • Sampling and bias
A student asked how long I expected the exam to take. I said I expected it to take 80 minutes. In tracking completions, four students finished in less than an hour, about one-third of the students had finished by the 70-minute mark, and approximately one-third were still working after 80 minutes. A couple of students worked on the exam for the entire 90 minutes.

As with past exams, students commented on the length of time for the test but did not have any issues with the difficulty of the exam.

We're off on Spring Break until April 2. We'll begin making use of what we have learned so far through a series of investigations.

Visit the class summary for a student's perspective and to view the lesson slide.

Thursday, March 21, 2013

Discrete Math - Day 39

Today the class had its mid-term exam. The structure of the exam is based on brain research. Specifically, I drew ideas from How the Brain Learns Mathematics, by David A. Sousa.

The mid-term exam is broken into four categories: number theory and cryptography, counting problems, polygonal numbers and finite differences, and probability. Within each category there are four problems. Two of the problems are at a level that I would expect every student to be able to complete with a certain level of success. One problem is at a level that I would expect a moderately successful student to be able to complete. The fourth problem is at a level that I would expect a successful student to be able to complete.

In total there are eight problems at the first level, four problems at the second level, and four problems at the third level. To obtain a C-level grade on the exam, a student must complete six of the first-level problems with a grade of partially correct (P) or essentially correct (E). To obtain a B-level grade, in addition to successfully completing six first-level problems, a student must score P or E on two of the second-level questions. To obtain an A-level grade, the student must meet the requirements for a B-level grade and complete one of the third-level problems with a score of P or E. In addition, for a B or A, at least half of the problems must be completed with a score of E. To ensure that students do not avoid a particular class of problems, they are also required to complete one problem from each category with a score of P or E.

Although this may sound confusing, it provides students choice in the problems they complete while ensuring that students demonstrate a specific level of knowledge and skill regarding the content. I use a color-coded spreadsheet to record problems attempted and problems completed successfully. An E score reflects that the problem is essentially correct, although there may be minor errors that result in an incorrect numeric solution. A P score reflects that the student was pursuing a productive approach to a solution but either was not able to complete their work or left out a component needed to answer the problem fully. A score of incomplete (I) reflects a solution that shows some inkling of what should be done but little indication that the attempt would actually lead to a solution. An I is also used whenever a solution records a numeric answer without explanation or justification. Finally, an X indicates that a student wrote something for an answer that shows no understanding whatsoever of the problem or its solution.

By using this spreadsheet, I can verify that the requirements for a specific grade level have been met. I can also use it to self-reference across students to ensure that scoring is consistent. Finally, I can see a summary of which problems were attempted, how many attempts were successful, and how many attempts were not successful. This allows me to evaluate the quality of the questions and the appropriateness of the questions as to their assignment to a difficulty level.

For example, last year I had a problem designated as first level, meaning I thought that the problem could be completed successfully by every student. As expected, a high percentage of the class attempted the question, but the results showed that very few students completed it successfully. At the same time, a problem that I felt would be very difficult for most students was classified as a third-level problem. That problem drew many more attempts than expected, almost all of which were successful. The spreadsheet allowed me to re-evaluate the assessment questions and make changes to the test that would better balance student capabilities and problem difficulty.

I do allow students to make use of their notes during the exam. My thinking is two-fold on this matter. First, the questions on the exam are not rehashes of problems worked on in class. The problems, for the most part, are designed to extend student understanding or to have students apply what they have learned in new ways. There is little that a student could pull directly from their notes and record as an answer to the questions on the exam.

The second part of my thinking reflects authenticity. When a professional mathematician is working through a problem, they have materials available to them for reference. It is rare that a mathematician would make a connection to something they learned a while ago and then say to themselves, "Oh well, I don't remember that math, so I guess I'll just skip the problem." No, they will reach over and pull a reference book or go online and skim through material that will help jog their memory of how the math they need works. I believe my students should have the same access as they work through unfamiliar problems.

Some students will complete the minimum number of problems while others will do additional problems to cover themselves. Today was a 90-minute class, and about a third of the class took 80-90 minutes to finish up their exams. Fewer than five students finished in 45-60 minutes.

Here are some additional observations about this test structure and administration. First, students do use their notes thoughtfully. You will observe students thumbing through their notes and taking time to actually read what they recorded, thinking about the examples and what they did to get to solutions.

Second, students do not complete the test linearly. They jump around. This has two impacts. First, I can leave students sitting in groups because they end up working on different problems and sections, which minimizes the risk and amount of cheating with this test design. Second, when grading, it is easy to check whether students who sat together completed all of the same problems with the same answers. My experience shows that cheating has not been an issue.

The fact that students jump around in answering questions and are writing their solutions on blank paper has changed my grading practice. With a traditional test, I would grade one page at a time, going through every student's work. Then I would proceed through the entire class, grading the second page. This would help ensure consistency of my grading. This process is not possible since the tests are not completed linearly. Now, I assess an entire test for one student and then move to the next student. I only record E's, P's, I's, and X's next to each response and don't worry about a numeric score at this point. Once the entire class is assessed, I record the results in my spreadsheet and then assign a numeric score. The spreadsheet becomes my mechanism for consistency.

Third, assessing with E's, P's, I's, and X's necessitates that I have a scoring rubric for each question. What does an E response contain? What mathematics should be demonstrated in the response? How much assistance and guidance would a student need to reach a correct solution? These questions help guide me as to what I should be looking for and considering as I assess each problem. This does not mean I have an extensively written rubric for each question, just that I have a solid understanding of what I am looking for in a response to each problem. This helps provide consistency as I work through the exams.

Finally, I have less grumbling about a test being unfair. Students typically respond that the test was difficult or hard but that it was fair. This is important as students have a choice as to what they tackle. Because of this, students realize that their choice of problem may not have turned out to be what they expected, which reflects on their level of understanding. It becomes a self-assessment that maybe they don't know a specific topic as well as they thought. The result is a sense of difficulty rather than a sense of unfairness.

For anyone interested, I can send you a pdf version of the exam and Microsoft Excel spreadsheet template for recording exam results. Just complete the contact form with your request.

My school is headed off to Spring Break and we won't start back up until April 2nd. I'll pick up with the class then.

Visit the class summary for a student's perspective and to view the lesson slide.

Wednesday, March 20, 2013

IPS - Day 38

Today was spent working through practice problems. I also briefly discussed the structure of the mid-term exam. I'll cover this in more detail in my next post.

The problems presented dealt with probability and simulation. The problems were well thought out and one presented a nuance that needed to be considered in order to obtain a correct solution. This problem took students a while to complete as it also involved simulating a random drawing problem without replacement.

The second problem dealt with a more traditional probability problem and required the construction of a table or Venn diagram to represent the situation. The problem then asked for conjunctive, disjunctive, and complementary probabilities. Most students seemed very comfortable with the second problem.

These two problems and questions about the mid-term structure took the entire class.

The next class is the mid-term.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 38

Today's focus was working on problems created by students.

The first question posed proved to be difficult for students. What is φ(90)? The difficulty came in that students just started to calculate the value directly rather than recalling properties of φ(n), specifically that φ(nm) = φ(n)φ(m) when gcd(n,m) = 1. So φ(90) = φ(9 x 10) = φ(9)φ(10) = 6 x 4 = 24.
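For anyone who wants to check values like these, here is a minimal brute-force sketch in Python (my own illustration, not something used in class):

    from math import gcd

    def phi(n):
        # Count the integers in 1..n that are relatively prime to n.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    print(phi(90))           # 24
    print(phi(9) * phi(10))  # 6 x 4 = 24, since gcd(9, 10) = 1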

Another number theory problem asked to find the gcd(88, 160).
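For reference, the Euclidean algorithm (the process we worked through back on Days 31 and 32) settles this in four divisions:

     160 = 1 x 88 + 72
     88 = 1 x 72 + 16
     72 = 4 x 16 + 8
     16 = 2 x 8 + 0

so gcd(88, 160) = 8.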

Two problems presented then dealt with ciphering. These were relatively straightforward problems. For example, encode and cipher the word "towel" using C(x) = x + 3.

Next came a couple of counting problems. One asked how many toppings a pizzeria offered if it could make 66 different two-topping pizzas. Another asked how many 8-character passwords could be created if you were required to use one digit and could use any number of upper-case letters, lower-case letters, and digits for the remainder of the password.
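Both counts can be checked quickly. A sketch in Python (the password count assumes the intended reading is "at least one digit, any mix otherwise," which is my interpretation of the problem):

    from math import comb

    # Pizza problem: find the number of toppings n with C(n, 2) = 66.
    print(next(n for n in range(2, 100) if comb(n, 2) == 66))  # 12

    # Password problem: 8 characters from 26 upper-case letters,
    # 26 lower-case letters, and 10 digits (62 symbols), with at
    # least one digit.  Count all strings, subtract those with none.
    print(62**8 - 52**8)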

Overall, the problems provided students an opportunity to revisit some problems that they may not have worked on recently.

Next class will be the mid-term.

Visit the class summary for a student's perspective and to view the lesson slides.

Tuesday, March 19, 2013

IPS - Day 37

This was the start of a two-day review prior to the mid-term exam at the end of this week.

Rather than repeat myself about the structure of the review process and its origin, I refer you to my Discrete Math post which goes into how the review process works.

For this class, I provided four topic areas and first asked students to come up with the top five concepts, ideas, or formulas that we have studied so far. Then I had students share out their ideas in their groups. The groups then worked on creating a list of five items under each of the four given topics.

Conditional Probability and Independence

  • Two or more variables relate to each other
  • P(A and B) / P(B) = P(A|B)
  • Independent means that event A is not relevant to event B
  • To show independence show that P(A)P(B) = P(A and B)
  • Conditional probability is when one event occurs assuming a past event occurred
Expected Value
  • Simulations
  • Outcomes
  • E(X) = Sum of xP(X=x) (see the simulation sketch below)
  • Random numbers
  • Mean
  • Sounding the Alarm investigation
  • Deal or No Deal investigation
  • Probability models
Probability
  • Rules
  • Tree Diagram
  • Venn Diagram
  • Random integers
  • Tables
  • Probability Model
  • Law of large numbers
  • Drawing cards and rolling dice
  • Experimental versus theoretical probability
  • Theoretical probability = number of favorable outcomes / total outcomes
  • Mutually exclusive events
  • If A and B are mutually exclusive then P(A or B) = P(A) + P(B)
  • Independent events
  • If A and B are independent then P(A and B) = P(A)P(B)
Sampling and Bias
  • Bias is when one answer seems better than other answers
  • Random / equal chances
  • Random samples are not biased
  • Surveys
  • Error is expected
  • Observational study
  • Sample size can be small if representative
  • Sample types: cluster, stratified, systematic, SRS, multi-stage
Groups were then asked to develop problems for the different categories that will be presented to the class next period. The groups needed to answer the questions they developed. These questions will be the review problems that students will work on.
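To make the expected value bullet above concrete, here is a minimal simulation sketch in Python; the die roll is my own example, not one the class used:

    import random

    # E(X) = sum of x * P(X = x); for one die roll this is 3.5.
    theoretical = sum(x * (1 / 6) for x in range(1, 7))

    # By the law of large numbers, a long-run average approaches E(X).
    rolls = [random.randint(1, 6) for _ in range(100_000)]
    print(theoretical, sum(rolls) / len(rolls))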


Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 37

Today we began the review process for the mid-term exam at the end of the week. The process I use came from an article I read in The Teaching Professor Newsletter. The premise is to let students build their own review guide and problem set.

To start things off, I ask students to list the five big topics studied in the course so far. I provide a limited time as I want them to focus on top-of-mind awareness. Next, I have students partner or work in groups to come up with an ordered list. In this work I encourage students to look through their notes to assist in getting good coverage. We then share out the topics.

I'll start with one group's list and then go around the class having other groups add topics not already listed. I will then group topics using a colored marker, for example indicating that counting, permutations, and combinations are all connected topics.

The class provided good coverage of the topics we have worked on this semester:

Counting topics

  • Counting
  • Combinations and permutations
  • Exponential powers

Figurate numbers and Patterns

  • figurate numbers (pentagonal numbers)
  • Gaussian summation
  • Pascal's triangle
Prime numbers and Ciphers
  • prime numbers
  • Euclidean algorithm
  • greatest common divisor
  • ciphers
Probability
  • probability
  • conditional probability
  • Bayes theorem
As the list grew, one student commented by saying, "Wow, we've covered a lot of math already." Yes, they have and there's still a lot more to come.


Students are then asked to develop three practice problems reflecting different topics and different difficulty levels. I circulate around and push students to consider the problem they are posing and the solution that results. The tendency is for students to create simple problems, but with a little push they respond to upping the complexity and requirements of the problems they create.

I then ask students to have their problems created in a format that can be shared with the class and that they should have the solution developed for their problem so they can check the work of their classmates.

The topic listing activity helps to focus the problems that are created. The problem creation activity actually requires students to think at a deeper level about the topic, the wording of a question, and the solution for the problem. This provides their first review practice.

In the next class we'll work through the problems created in class today. I have extra practice problems ready in case more are needed, though providing 15 to 20 problems from the students generally is enough practice for a single class.

Visit the class summary for a student's perspective and to view the lesson slides.

Monday, March 18, 2013

IPS - Day 36

Today we wrapped up looking at experimental designs. To begin things, we finished watching the Against All Odds experimental design video. This video nicely lays out key components of a well designed experiment and provides clear examples of how these components are implemented.

After viewing the video, I briefly reviewed key aspects of experimental design. We then discussed some ethical issues with experimentation. The video brought up one such situation and I provided several others. The discussion brought forth several questions and issues that the students thought of with regard to ethics. Of particular interest was the idea of informed consent and whether the use of a placebo may have a detrimental impact on a subject.

Class concluded with students asked to design a study that would provide evidence for a quote lifted from a magazine article. We'll start class tomorrow looking at and discussing their designs. In particular, we'll look at the design type (survey, observational study, or experiment) and discuss the pros and cons of that particular design.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 36

Today was a transition day to begin reviewing for the upcoming mid-term. The mid-term will cover all material up to this point. Because we have a Spring break next week, I didn't want to start looking at congruence and modulo arithmetic until after the break. This leaves two full days for review.

Today, I started class by giving them a cipher function and asking them to use it. I debated whether or not to use a valid cipher function and elected to use C(x) = 2x + 3. When using this cipher function, some letters get mapped to the same value. What I debated was whether this would be the time to introduce the idea that not all equations work as cipher functions.

Students worked through the cipher and did notice that the cipher mapped different letters to the same value. I told them that we would look at why this happened after the break but to just proceed for now since we weren't going to decipher any messages. I believe this will enable me to re-introduce this idea after the break and ask the question of what went wrong with the cipher function and how we could avoid a similar problem. The result we need is directly connected to the concepts of congruence and modular arithmetic that we will be studying.
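A quick way to see the collision, assuming values wrap around mod 26 under the a => 00, ..., z => 25 encoding from the earlier lesson (a sketch in Python):

    # C(x) = 2x + 3 reduced mod 26 is not one-to-one because
    # gcd(2, 26) = 2, so the 26 letters land on only 13 values.
    values = [(2 * x + 3) % 26 for x in range(26)]
    print(len(values), len(set(values)))  # 26 13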

After ciphering text I asked students to find D(x), the decipher function, for C(x). Many students struggled with this. They would come up with a result and I would ask them if they had tried verifying it. They were unsure how to do this. I told them that if they picked a value and passed it through C(x), then when that result was put into D(x), in essence calculating D(C(x)), they should get back the original value they used for x. Once they tried this for a specific value they could see that their decipher function was not correct.

Eventually, students started to remember how to find inverse functions of linear equations and found that D(x) = (x - 3) / 2.

The next problem I posed was related to data encoding. I reviewed the ideas of bits and bytes and then asked students to count the number of characters that could be represented by one byte, which consists of eight bits, each containing either a zero or a one. Students were somewhat tentative, but most figured out that the result was 2⁸ = 256 characters.

I briefly described how this allowed the representation of 26 lower-case letters, 26 upper-case letters, and 10 digits, plus many special characters such as punctuation, with room left to represent additional characters. However, the Chinese character set contains over 2,000 characters. How many bits are needed to represent 2,000 characters?

Students quickly figured out that 11 bits would be able to represent 2,000 characters. Computers use bytes as their base unit of storage, so 11 bits means that we actually need to use two bytes. How many characters can two bytes hold? Students extended what they were doing to see that two bytes hold 2¹⁶ = 65,536 characters.
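The arithmetic behind these answers is a one-liner each; a small sketch in Python, assuming the question is the smallest n with 2ⁿ at least 2,000:

    import math

    print(2 ** 8)                       # 256 characters in one byte
    print(2 ** 16)                      # 65,536 characters in two bytes
    print(math.ceil(math.log2(2000)))  # 11 bits for 2,000 characters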

These problems are perfect extensions for the material that was just covered. They connect the ideas of data encoding that are fundamental to cryptography with the basic concepts of counting. The need to know how many things can be represented in one or two bytes is a critical aspect of data representation in computer science. Historically, it was a significant change to convert from single-byte to double-byte representations to address issues arising from the expansion of the world wide web and the global economy.

Visit the class summary for a student's perspective and to view the lesson slides.

Friday, March 15, 2013

IPS - Day 35

This lesson focused on the fundamental differences between observational studies and experiments, which are the last two methods we will study for gathering data.

To begin, the class was asked what comes to mind when they hear the term observational study. Students thought about watching without interacting, looking for details, and being an objective third party. This basically gets to the essence of an observational study. In an observational study choices are not assigned and the observer simply sees what has happened in the past or what unfolds before them as they observe.

I briefly reviewed the ideas of a retrospective study and a prospective study, providing simple examples to help clarify their meaning. A retrospective study involves looking back through existing data. A prospective study looks at what happens on an ongoing basis.

In contrast to observational studies, a researcher could conduct an experiment. Experiments are characterized by

  1. the manipulation of treatments that are administered to subjects or experimental units, 
  2. the random assignment of groups to different treatments, and
  3. the comparison of results between treatment groups.

I checked students' understanding by providing three simple examples and asking them whether each was an observational study or an experiment and, if an observational study, whether it was prospective or retrospective. This helped, as many students thought the examples were all experiments. As we discussed these I asked students to focus on whether or not a treatment was being manipulated by the researcher. Using this criterion, students were able to correctly classify each example.

I asked students to define what an observational study and an experiment would look like if we were interested in studying soup preferences. Students worked in their groups on these and then we discussed. The class seemed comfortable with the idea of how to structure an observational study. They were not as sure about structuring an experiment. I again asked students to consider how to manipulate treatments and how to include random assignment. As this was the first look at experiments, I wasn't concerned that a workable structure was identified, only that the key aspects were being considered.

After this, I provided four more examples for students to consider. In this go around students did a much better job differentiating between an experiment and an observational study.

I started a video on observational studies and experiments. I wanted the class to see an actual observational study in action. The Against All Odds videos, while somewhat dated in look, still provide a solid presentation of the topics. For this particular class I used Video 12: Experimental Design.

We'll finish the video and take a closer look at the structure of experiments and ethical issues surrounding experiments next class.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 35

Today we expanded the idea of a cipher function to be a linear function. When a cipher function has the form C(x) = mx + b (mod 26), the cipher function is called an affine cipher.

In order to use an affine cipher, values must be reduced modulo 26. I explained to students that modular arithmetic makes this process easier. The best part is that since the class used remainders while working through the Euclidean algorithm, they were already familiar with the idea of using remainders.

The process of applying an affine cipher involves these steps:

  1. Encode your message using a=>0, b=>1,..., y=>24, z=>25
  2. Apply your cipher function C(x) = mx + b
  3. Find C(x) mod 26 by dividing C(x) by 26 and finding the remainder
  4. The remainder becomes your digital ciphertext
  5. Convert your digital ciphertext to ciphertext by converting the remainder to the corresponding letter
For example, let's use C(x) = 3x + 7 and the plaintext "Hello." 

The digital plaintext for hello is 07 04 11 11 14. Applying the cipher to each of these values produces the values 28 19 40 40 49. Taking each of these values mod 26 yields digital cipher text of 02 19 14 14 23. Decoding this to ciphertext produces "ctoox."
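The whole worked example can be checked with a few lines of Python; this is a minimal sketch of the steps above, not code used in class:

    def cipher(plaintext, m=3, b=7):
        # Encode with a => 0, ..., z => 25, apply C(x) = mx + b,
        # reduce mod 26, and convert back to letters.
        out = []
        for ch in plaintext.lower():
            if ch.isalpha():
                x = ord(ch) - ord('a')
                out.append(chr((m * x + b) % 26 + ord('a')))
        return ''.join(out)

    print(cipher("Hello"))  # ctoox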

Students were asked to pick values for the slope and intercept of a linear equation that they could use as a cipher function. I instructed them to use smaller values since this exercise was to help them get a better sense of how the process works and what the results are when ciphering a message using an affine cipher.

While there were a few questions, most students picked up on the idea and were able to cipher the message "Hello World." For the most part, students started the process and wanted to verify that they were proceeding ahead properly. The other questions arose when they had to determine the result modulo 26. Once they understood they were finding the remainder after division by 26 they were able to proceed on.

The next step in the process was for students to swap cipher messages and to see if they could determine what values were used by their classmate to create their affine cipher. Before doing this, I briefly touched on the idea of how to find an inverse to a linear equation.

If C(x) is our cipher function and D(x) is our decipher function, we must have D(C(x)) = C(D(x)) = x, that is, C(x) and D(x) are inverse functions. Since C(x) is an affine cipher, C(x) = mx + b for some integer values m and b. Therefore C(D(x)) = mD(x) + b = x, which means that D(x) = (x-b)/m.


Students attempted to break the code of their classmates knowing that the plaintext message was "Hello World." They struggled with breaking the code since we are not dealing with a simple inverse but an inverse modulo 26. This makes finding the inverse values much more difficult.

We discussed the issues with ciphering and deciphering. Students had no issues with ciphering but found the deciphering piece impossible. I pointed out that the complexity of finding the inverse modulo 26 made this task difficult.

Students need to understand how modular arithmetic works and the properties and characteristics they can use when working with values modulo 26 or modulo any other integer value.

This will be the next area of investigation for the class.

Visit the class summary for a student's perspective and to view the lesson slides.

Thursday, March 14, 2013

IPS - Day 34

Today's focus was on bias. I like to start off the discussion of bias with a look at a situation involving target shooting with a bow and arrow. I provide three scenarios of missing the bulls-eye and ask students to consider what is error and what is bias.

What is commonly perceived as error is, many times, a statistical bias, and statistical error is often not viewed as error at all, since the term is commonly associated with the idea of making a mistake. This helps students to understand that in statistics, an error is a natural occurrence of random fluctuation while bias is a systematic variation that pulls us off target from the population we are interested in studying.

I asked students to consider how bias may arise. They discussed this in their groups and we shared out as a class. The results included response bias arising from question wording, topic, and interviewer, as well as undercoverage.

I stepped through the various types of bias, giving examples and firsthand experiences when applicable. The bias topics covered were:

  • voluntary response bias
  • convenience sampling
  • undercoverage
  • non-response bias
  • response bias
Voluntary response bias occurs when a survey is offered and all responses made to the survey are counted. There is no attempt at structuring the sample. A good example of this is the web site pop-up survey. The pop-up is offered and anyone willing to respond is counted. Those who have a strong opinion, either positive or negative, tend to respond.

Convenience sampling is taking a sample through the path of least resistance. Whatever offers itself as the easiest way to gather data is taken. In business, customer surveys are often conducted by surveying the best customers because they are the ones that are most likely to respond.

Undercoverage occurs whenever a group within the population is either not sampled at all or not sampled enough. For a school survey, obtaining a sample of 80 students and then finding that only 5 (less than 10%) of the sample are seniors would be an example of undercoverage of this population segment.

Non-response bias occurs when someone is asked to participate in the survey and refuses to respond. This is a wide-spread problem. When I first started working in marketing research, non-response rates were in the neighborhood of 20%. Today they are closer to 70%. This is a huge issue that is being extensively researched. The issue is that once someone doesn't respond there is no way to know why or what their characteristics are to adjust for the non-response.

Response bias results from any issues with the survey instrument itself or in the survey's administration. The length of the survey, the wording of the questions, the attire of the survey administrator, or the survey topic can all cause response bias. I participated in a survey one time in which the surveyor basically answered all the questions for me. This was response bias. 

We finished class by having students work through some questions about sampling designs and questions. They were to identify which sampling plan would have the least bias and which questions needed to be re-worded in order to reduce bias.

We'll go over these in class and then cover the idea of observational studies and experiments as two more ways to gather data.


Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 34

Today we continued exploring fundamental concepts in cryptography. Specifically, today was focused on differentiating ciphering from encoding and the introduction of functions as ciphers.

With the previous lesson, students had a foundation for the idea of a cipher. The use of a shift cipher allows students to understand the basic process of ciphering and deciphering without getting bogged down in any difficult mathematics or technical process issues.

The idea of encoding is introduced as the mechanism of converting plaintext to digital content. I briefly discuss the idea that machines are built of circuits and that a circuit can either be on or off. This means that computers can fundamentally only understand values of zero or one. I introduce the idea of a bit as being a single on/off value and a byte as containing 8 bits. Since most students are familiar with the terms kilobyte, megabyte and gigabyte, it is easy to make a connection to this idea and to then relate that a kilobyte is 1,000 bytes and a megabyte is 1,000,000 bytes.

With this foundation I introduce the process of

  1. creating plaintext, 
  2. encoding the plaintext to digital plaintext, and
  3. applying a cipher to create digital ciphertext. 

To reverse the process you would

  1. decipher the digital ciphertext to digital plaintext,
  2. decode the digital plaintext to plaintext, and
  3. read the plaintext.

We make use of a simple encoding scheme: a => 01, b => 02, c => 03, ..., y =>25, z => 26.

I have students use this scheme to encode and decode a couple of messages to be sure they understand the process. Students then started to apply a cipher to their digital plaintext. At this point, a couple of students asked what happens if your value goes over 26. A couple of other students said you should just wrap the values around and start over. I told students we would discuss this in just a few minutes.

First, I wanted to focus on the functional aspect of the cipher. It gets cumbersome to write something such as digital ciphertext = digital plaintext + 5. We introduce functional notation and let C(x) represent a cipher function. We can then write C(x) = x + 5, where x is the digital plaintext value. Students practiced encoding and ciphering a brief message using a cipher function.

The process of deciphering now becomes an application of an inverse function. The deciphering function D(x) needs to be the inverse of C(x) since we need D(C(x)) = x. This is, definitionally, an inverse function.

Now the process of ciphering and deciphering can be considered from a purely functional aspect. We use a function, C(x), to cipher digital data and then apply its inverse, D(x), to decipher the ciphered digital data.

One final issue needs to be addressed. If we use the encoding scheme of a => 01,...z=>26 and we try to simply subtract 26 from each value, we have z=>00 which is not a valid value. To accommodate an easier process, we use the encoding scheme of a=>00, b=>01,..., y=>24, z => 25. Now we have a complete cycle in sequence.
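With the a => 00, ..., z => 25 scheme, wrap-around is just a remainder, which a short Python sketch makes visible (using the C(x) = x + 5 example from above):

    # Shift cipher and its inverse under the 0-25 encoding.
    def C(x):
        return (x + 5) % 26

    def D(x):
        return (x - 5) % 26

    # Every value deciphers back to itself: D(C(x)) = x.
    print(all(D(C(x)) == x for x in range(26)))  # True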

I let students work through this idea and practice using the new encoding scheme along with a cipher function.

This process sets the stage for introducing the idea of modular arithmetic, which we'll need as we move toward more advanced ideas in cryptography.

Class concluded with students summarizing thoughts about today's lesson in their notes.

Visit the class summary for a student's perspective and to view the lesson slides.

Wednesday, March 13, 2013

IPS - Day 33

Today was a short day due to state testing. The focus today was to solidify students' understanding of different sampling techniques.

To start things off we looked at the results of systematic sampling. The distribution was highly concentrated, with all mean areas falling between 6 and 8 square units. At the same time, the center of the distribution was still very close to 7 square units, which was consistent with the other sampling techniques and which was the point of the exercise.

I then had students create a table listing the various sampling techniques we have looked at: simple random sample (SRS), stratified sample, cluster sample, systematic sample, and multi-stage sample. I asked students to think about advantages, disadvantages, and questions they had for each.

This exercise really helped students to differentiate between the sampling techniques. Students started to better grasp what made cluster sampling different from stratified sampling. When discussing a SRS, students considered the idea that you could end up with a sample that might miss a key group, while a cluster sample may be overly generalizing a population's characteristics.

Because of the shortened class period this was all we were able to accomplish today. We'll take a look at bias next class. This is a natural extension as students mentioned bias as a possible disadvantage several times.

Visit the class summary for a student's perspective and to view the lesson slide.

Tuesday, March 12, 2013

Discrete Math - Day 33

Today we took our first look at cryptography. This was a shortened day and this lesson fit in perfectly.

Class started with a simple example to illustrate the ideas of plaintext, ciphers, and ciphertext. The class was asked how they might make a piece of text "secret." After brief discussions in their groups most were unsure how they would proceed. A couple of students suggested shifting letters.

This idea is exactly what Julius Caesar used. The traditional Caesarean cipher shifts letters by three characters. A student asked how a translation would occur as they were not clear on the process. I used the plaintext "zoo time" to illustrate how the cipher would convert this to the ciphertext "crr wlph."

I asked students to convert the plaintext "You too Brutus" to ciphertext. This allowed students to get a better sense of the process of ciphering. The result was "Brx wrr Euxwxv." Students seemed comfortable with the idea, so we were ready to move on to the next challenge.

Students picked their own value to shift and then used their cipher on the plaintext "I conquer." They swapped messages with a classmate and tried to determine the value used to shift letters. Even though the students knew what the plaintext message was, there were still some challenges in determining the value used in the shift.

Next, I had students create a brief plaintext message of 10 characters or less. They then used their own shift cipher. Students swapped ciphertext and tried to determine the plaintext message. Students found this extremely challenging. Most students made little or no headway in determining the plaintext. A couple of students wondered if they could look at the most frequently used characters.

I displayed the relative frequency of occurrence for different letters in English. I told students they could try assuming the most frequently used letter was e and see if this led to a translation. If not, assume the most frequently used letter was a t and try again. They could continue on in this manner to see if they could de-cipher the ciphertext.
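For anyone who wants to automate the tally, a minimal frequency-count sketch in Python (the ciphertext here is the "zoo time" example from above):

    from collections import Counter

    def letter_counts(ciphertext):
        # Tally the letters, ignoring spaces and punctuation.
        return Counter(ch for ch in ciphertext.lower() if ch.isalpha())

    print(letter_counts("crr wlph").most_common())  # r appears most often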

This process still had little success. I pointed out that computers can easily be programmed to de-cipher ciphertext using shift ciphers. I used a Caesarean shift applet and a student's ciphertext that was school appropriate to show how easily a computer could de-cipher ciphertext.

Shift ciphers are not secure. A more sophisticated and secure method for creating ciphers is needed. As it turns out, prime numbers play a key role in the development of secure ciphers.

Class concluded with students summarizing their thoughts about ciphers and questions they have about secure ciphers.
Visit the class summary for a student's perspective and to view the lesson slides.

Monday, March 11, 2013

Discrete Math - Day 32

Today we wrapped up looking at the Euclidean algorithm.

The first question posed was: will you always reach a remainder of zero when using the algorithm? This stumped students. I don't know if it was because of the question or because it was early on a Monday morning, but the ideas and discussions were not flowing.

I asked students what they could tell me about the remainder r when the integer b is divided by the integer a. This elicited a response that the remainder must be less than a, i.e. r < a. When nothing else was forthcoming I asked if it was possible for a remainder to be negative. Students responded that the remainder could not fall below zero. I wrote the following inequalities on the board: 0 ≤ r < a.

I then asked what would happen when we proceeded with the Euclidean algorithm and divided a by r. Students said the new remainder, r₁, would be less than r. I wrote down 0 ≤ r₁ < r.

I told students we could continue with this process and wrote the following:

     0 ≤ r < a
     0 ≤ r₁ < r
     0 ≤ r₂ < r₁
     ...
     0 ≤ rₙ < rₙ₋₁

This finite sequence has a lower bound of zero. In addition, in at most a steps, we are guaranteed to reach a value of zero. [There are proofs that relate the number of steps to the golden ratio and show that a worst case arises when the two values being evaluated with the Euclidean algorithm are consecutive Fibonacci numbers. While this could be an interesting investigation for a college-level course, it is sufficient here to have students realize that eventually a remainder of zero must be obtained.]

The next question posed was: why does the algorithm work? While most students struggled with this question, a couple of students had the idea that the final division, resulting in a remainder of zero, showed that the greatest common divisor of the last two values used was the final divisor. This final divisor was also a factor of the previous division, and so on until you reached the original two values.

This provided a solid intuitive connection to why the algorithm works. Look at the problem of the integer a dividing the integer b with quotient q and remainder r. Then b = qa + r. Let g = gcd(a,b); then g|a and g|b, so g|qa and g|(b - qa) = r. Conversely, any common divisor of a and r also divides qa + r = b, so no common divisor of a and r can exceed g. Therefore g = gcd(a,b) = gcd(a,r). Each successive division preserves g, and so the process will yield the greatest common divisor.

I provided four problems for students to work through. This let students practice using the Euclidean algorithm and allowed me to see if they understood the process involved.
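The division-with-remainder step the class practiced translates almost directly into code; a minimal sketch in Python (my own illustration):

    def gcd(b, a):
        # Replace (b, a) with (a, r), where r is the remainder when
        # b is divided by a, until the remainder reaches zero.
        while a != 0:
            b, a = a, b % a
        return b

    print(gcd(160, 88))  # 8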


Visit the class summary for a student's perspective and to view the lesson slides.

IPS - Day 32

Today we finished looking at the rectangle sampling investigation. The sampling techniques used were stratified sampling and cluster sampling.

In looking at stratified sampling, the tendency is for students to just take the 10 rectangles they selected and calculate the average size of the 10. It needs to be emphasized that the two groups are not of equal size and therefore the areas of the rectangles cannot be treated equally. The first group contains 59% of all the rectangles while the second group contains only 41%. We therefore need to calculate a weighted average to account for this difference.
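The weighting is a one-line calculation once the two group means are in hand; a sketch in Python with hypothetical group means (not the class's actual data):

    # Stratified sample: group 1 holds 59% of the rectangles,
    # group 2 holds 41%, so the group means are weighted accordingly.
    mean_group1, mean_group2 = 8.0, 5.5  # hypothetical values
    print(0.59 * mean_group1 + 0.41 * mean_group2)  # 6.975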

Once students realized this was needed they quickly found their sample, calculated their average rectangle size, and plotted the data on a dot plot. We converted the dot plot to a histogram and discussed the similarities and differences to our simple random sample plot.

The distinguishing characteristics of a stratified sample are noted: the population is broken into groups; the groups must be put together in order to have a complete picture of the population; and the sizes of the groups must be accounted for by weighting in order to keep the sample representative.

We then looked at cluster sampling. Some students needed guidance as to randomization for the sample. In this case, the two clusters are selected randomly and then a census is conducted within the two selected clusters. Students generated their data and we looked at the dot plot and histogram.

When comparing the graphs of the simple random samples, the stratified sample, and the cluster sample, students noted that there was less variation in the stratified and cluster samples. They also noted that all four graphs were unimodal and centered near 7. The idea of symmetry and skewness was introduced as students noted the uneven distribution present in a couple of the graphs.

I had the students read through 5 scenarios to check their understanding of the sampling techniques used. Students seemed comfortable with the sampling techniques as almost everyone got all five correct.

For homework, I asked students to conduct a systematic sampling of the rectangles. We'll compare this distribution with the others next class.

Visit the class summary for a student's perspective and to view the lesson slides.

Friday, March 8, 2013

IPS - Day 31

Today we explored sampling and bias. To start things off we reviewed a couple of the sampling plans that students created. This allowed for a discussion of how to introduce randomization into the designs and to discuss whether or not the sample would be representative.

The discussion brought up issues that included the topic or objective of the sample. For instance, students wondered whether or not sampling from attendees of a dance would generate a representative sample. If the topic of the survey were about the quality of the DJ this would be an acceptable sample space. However, if you are trying to determine the next dance's venue, this would not be a representative sample of students as you are missing those students who did not attend the dance because they did not like the current venue.

One student then said it sounded as if there were never a perfect sample. This is true. You can only hope to create as representative a sample as you can, with as little bias as possible. It won't be perfect, but following good sampling procedures and techniques will get you closer to the population that you are interested in studying.

We then looked at an example of a biased sample compared to a simple random sample. To do this I followed the Random Rectangles investigation. There are many versions of this activity available. I used the one from NCTM's Navigating Through Data Analysis Grades 9-12.

Students could see clearly that their subjectively selected rectangles had a dispersed, almost uniform distribution. The simple random sample generated a much more compact, unimodal distribution that was roughly symmetric. The mean area for the subjective rectangles was approximately 2 square units larger than the simple random sample's mean.

Next, students compared a simple random sample of 5 rectangles to a simple random sample of 10 rectangles. In this case, the sample size of 10 generated a unimodal, roughly symmetric distribution that had approximately the same mean but was even more tightly compressed around the mean.

Students seemed to understand how to generate a simple random sample. They also readily recognized that the simple random sample was generating a more representative sample.

Next class we'll compare simple random samples to cluster and stratified samples.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 31

Today's class started off with a quiz. There were three problems: 1) finding a moderately large prime number, 2) calculating a discrete probability, and 3) using Bayes theorem.

After the quiz we dove into the Euclidean algorithm for finding a greatest common divisor. This is an easily accessible algorithm since students generally know (but may not like) the process of long division. The tendency is for students to want to convert their remainders to a decimal or fraction. It takes some convincing that we just write the result with a remainder, much as they used to when first learning division in elementary school. I tell students that the remainder is actually a very valuable entity that we will make use of extensively.

I have students perform one or two simple long division problems just to rekindle that long lost love of the process. I will eventually show them how to get the remainder from a calculator, but for now it helps them better understand the algorithmic process.

I have students use one of these problems and divide the divisor by the remainder successive times until they end up with a remainder of zero. Students then look at their last non-zero remainder and check to see if it divides the two original values used. Students then find the greatest common divisor of the two original values and see that the two results are the same. The question is: will this happen again?

I post four values and ask students to pick one pair and try out the process again. Most students dove right into this process but a few wanted to verify they were following the procedure correctly. The only point of confusion arose with the values of 45 and 225. In this situation, the very first division produces a remainder of zero, so there is no last non-zero remainder to point to. I realized that, although the Euclidean algorithm is traditionally defined by looking for the last non-zero remainder, it would be less confusing for students to think of the last divisor (the value used to divide with) that produced a remainder of zero. In the case of 45 and 225, 45 is the last divisor that produces a zero remainder. Students saw that the process did produce the greatest common divisor. I mentioned that the process just followed is called the Euclidean algorithm and that it has been known for well over 2,000 years.

I then asked the class what algorithm meant. Typically students are not clear as to what the word algorithm means. Dictionary.com defines algorithm as "A process or set of rules to be followed in calculations or other problem-solving operations." This is basically how I describe algorithms to students. An algorithm is a process defined by a series of steps that are repeated until a solution is reached.

I then present the Euclidean algorithm in a more systematic, procedural fashion. The initial condition is established, the procedural process is defined, and the termination of the process is defined. Although there are close ties to computer programming, I do not go into the explicit connection in this class. This could be an interesting extension for a class that wants to use a discrete mathematics course to establish the mathematical foundations for a computer programming or computer science class.

The big question hanging out there at this point is: will the algorithm always result in a zero remainder? Class time was up, so I posed the question and asked students to ponder it. This is where we'll start things off next class.

I was looking ahead to the next few lessons and I am excited about their progression. From the Euclidean algorithm we will move directly into elementary concepts of cryptography. These concepts use the ideas of division and remainders, so there is a direct connection. This will also introduce ideas about modular arithmetic and drive the need for students to better understand modular arithmetic and congruence. This leads back into exploring these ideas with their direct connections to the prime number investigations we have been working on. Afterward, we go back into cryptography and make use of congruence and modular arithmetic. This leads to ideas in modern cryptography that incorporate Euler's φ-function. Thus, we circle back through ideas that we started with at the beginning of the unit.

Visit the class summary for a student's perspective and to view the lesson slides.

Thursday, March 7, 2013

IPS - Day 30

The focus of today's lesson was on sampling, specifically how we generate representative samples.

To kick things off, I asked students what characteristics they would like a sample to possess. The class did a nice job considering things like gender, ethnicity, age, income, likes/dislikes, and beliefs. In all, their concern was to get representation of these various groups in their sample.

Now that students were focused on the idea of getting representation, I asked them how to determine who to select for a sample of students from the school. The only restriction placed on the sample was that they needed a sample of between 30 and 160 students. I needed to emphasize that the task was not to create questions they would want to ask students; their task was to decide which students they would select to ask.

With that, the class worked in groups for approximately six minutes on determining what they would do. A couple of groups really struggled with the task. They wanted to come up with extensive selection criteria. I asked them how they would determine which students to ask to go through their screening process. This pushed their thinking but not enough to have them view the situation from a different perspective.

I told students the three big ideas of sampling:

  1. Generating a representative sample that reflects the population we are interested in studying
  2. Randomizing selection to eliminate or reduce any factors that we might not have considered
  3. Creating a sample whose size was not too large
We then looked at the sampling designs that the students created. Five different student designs were written on the board. The results provided examples of stratified sampling, cluster sampling, and multi-stage sampling. I discussed how the students' designs illustrated these sampling techniques and the key characteristics of each. I also discussed systematic sampling and simple random sampling.

Students seemed to grasp these ideas. For the last 15 minutes of class, I had students decide on two ways of generating samples. The intent was for students to provide more detail on their sampling plan. As I reviewed their designs, the one theme that recurred was randomization. Where does randomizing occur in the design? Although students were thinking implicitly that a classroom or a group would be selected randomly, I wanted them to explicitly describe where the randomization took place in their design.

We'll look at what they came up with next class. 


Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 30

Today we continued looking at prime factorization and making connections to the greatest common divisor (gcd). I started by asking students what result they had for how many factors 1872 possessed. As anticipated, only a handful of students had worked on this at home. I gave them 65 seconds to find the prime factorization of 1872. The prime factorization of 1872 is 2⁴ x 3² x 13.

I asked students what this told them about the number of factors for 1872. Many were unsure what this told them. I related this back to the shirts and pants problem from when we were counting. If we had 3 pants and 4 shirts we could mix and match these 12 ways. The 3² generates 3 distinct factors: 1, 3, and 9; the 13 generates 2 distinct factors: 1 and 13. How many ways can we mix and match the 3 factors of 3² with the 2 factors of 13? This is exactly the same situation as the shirts and pants problem.

Students started to multiply and came to a solution of 30. One student had tried to list all of the factors and had come up with 29. Of course using counting techniques is much faster and more accurate.

I pointed out to students they should make use of their divisibility rules to help them with prime factorization. It is readily apparent from divisibility rules that 1872 is divisible by 9 since the sum of its digits is divisible by 9. In addition, the last three digits are divisible by 8 so 1872 is also divisible by 8. Once these are factored out you are left with 26, which is easily factored to 2 x 13.

I gave the students the number 1976. They worked through this value and found the prime factorization was 2³ x 13 x 19. We talked through the number of factors and more students were comfortable with seeing that 1976 had 4 x 2 x 2 = 16 factors.

I gave the class one last number to work with: 3345. The prime factorization went much faster this time as students realized the number was divisible by both 3 and 5. Factoring out 15 left a value of 223, which they saw from their prime number list was also prime. So 3345 = 3 x 5 x 223 and had 8 factors.
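The counting rule the class developed is easy to express in code; a minimal sketch in Python using the three numbers from today:

    def divisor_count(factorization):
        # If n = p1^a1 * p2^a2 * ..., then n has
        # (a1 + 1)(a2 + 1)... divisors.
        count = 1
        for exponent in factorization.values():
            count *= exponent + 1
        return count

    print(divisor_count({2: 4, 3: 2, 13: 1}))   # 1872 has 30 factors
    print(divisor_count({2: 3, 13: 1, 19: 1}))  # 1976 has 16 factors
    print(divisor_count({3: 1, 5: 1, 223: 1}))  # 3345 has 8 factors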

I asked students to summarize in their notebooks how to count the number of factors that an integer has.

I then asked students to find the greatest common divisor for three different pairs of numbers. Students typically made factor trees and used these to find the gcd. We shared out results to make sure everyone was in agreement.

Next I asked students to determine the prime factorization for all of the numbers used. We looked at the prime factors and tried to make connections back to the gcd of each pair. Students recognized that the gcd was the product of all the common prime factors.

I asked students to pick two numbers, one a two-digit number and one a three-digit number, and see if this worked. Students worked through their pairs and found that it did. The question is whether or not this will always work.

This problem lends itself to illustrating why proof is so important. We found a simple, convenient way to find a greatest common divisor of two integers and would like to be able to use it with confidence. Without proof, we could use the process but then would need to verify that the value we calculated was, in fact, the greatest common divisor. If this result were proven true for all integers then we would not have to confirm the result each time.

I asked students to consider how they could create an argument that this process would always work. I don't expect students to create a highly detailed, formal proof in a class like this, but I do expect students to be able to provide an informal argument that would basically provide the proof outline but may lack some detail required of a formal proof.

After allowing students some time to think about what they could say, I provided an informal proof similar to what I would expect students to present. The proof below includes italicized material that provides additional foundation for the proof but is not something I would expect the typical student to provide; the material I would expect from students is bolded.

Greatest Common Divisor Conjecture: The greatest common divisor q of two integers n and m is equal to the product of all the common prime factors of n and m.

We will make use of the previous theorem that every integer has a unique prime factorization, along with the lemma that if an integer a = bc and a prime p divides a, then p|b or p|c.

First, let q be equal to the product of all prime factors that are common between n and m. Then n = qr, where r is the product of all the prime factors of n that are not among the common ones. By definition, this means q|n. Similarly, we see that q|m.

We have now shown that q divides both n and m. Is there any number s such that s > q and s|n and s|m? For this to happen, s would have to be a product of prime numbers that appear in both n and m, and it could not contain any primes that do not appear in both; otherwise s would fail to divide n or m. Since s can only contain primes that are factors of both n and m, and q contains all of these primes, s <= q. Therefore q is the largest integer that divides both n and m and is, therefore, the greatest common divisor of n and m. Q.E.D.

Students needed some time to absorb the argument but many started to see how the logic tied the concepts together to demonstrate the truth of the conjecture.

Class concluded with students summarizing in their notes what they learned today.

Visit the class summary for a student's perspective and to view the lesson slides.

Wednesday, March 6, 2013

IPS - Day 29

Today was focused on looking at graphs of data. The student survey data was used, and students presented their graphs to the class. The presentations were used to emphasize the characteristics of good graphs and which graphical representations were and were not appropriate for categorical and quantitative data.

Most students did well with categorical graphs. Students used either pie graphs or bar graphs to present their data. With the exception of minor labeling issues, the graphs presented were well constructed.

For the quantitative data there was more variation. Students created histograms for the number of sisters but labeled the x-axis as if the values were categories. Bin sizes were not always equal on graphs, and labeling was not as consistent.

One student created a line plot for gas prices. This led to a discussion of the appropriateness of this graph. When asked if there were two variables being analyzed, the response was no. I then pointed out that the order of the data set was arbitrary and could be reordered, which would result in an entirely different graph.

No students created a stem and leaf plot, although most students had indicated that they knew how to create one. One student created a box plot. The graph was well labeled. The whiskers were drawn to the minimum and maximum. I didn't check to see if there were outliers but the proper construction of a box plot making use of the 5-number summary and IQR will need to be addressed.
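
When we do address box plot construction, the computation behind it is short. Here is a sketch using Python's statistics module, with made-up gas prices, showing the 5-number summary and the 1.5 x IQR fences that determine outliers:

```python
import statistics

prices = [3.45, 3.52, 3.58, 3.61, 3.65, 3.70, 3.79, 4.59]  # hypothetical data

q1, median, q3 = statistics.quantiles(prices, n=4)  # quartile cut points
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in prices if x < lower_fence or x > upper_fence]

print("5-number summary:", min(prices), q1, median, q3, max(prices))
print(f"IQR = {iqr:.3f}, fences = ({lower_fence:.3f}, {upper_fence:.3f})")
print("Outliers:", outliers)  # the whiskers should stop short of these points
```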

The plan is to cover these types of issues as the need arises while analyzing the data that the class generates.

Visit the class summary for a student's perspective and to view the lesson slide.

Tuesday, March 5, 2013

Discrete Math - Day 29

Today we continued looking at how many factors a number contains. Students were still struggling with finding connections, so I asked them to generate data but to organize it in order to make it easier to find patterns.

I asked students to look at the number of factors for prime numbers, square numbers, and composite numbers made from two primes. This resulted in students uncovering the following:

  • Prime numbers always have two factors
  • Squares of prime numbers have 3 factors
  • Composite numbers made from two prime numbers have 4 factors.
I then focused on the number 6 and asked why it would have 4 factors. I asked the class to consider this from a counting perspective. Students still were unsure, so I asked them to look at the individual factors for the two prime numbers (we knew that 2 had 2 factors and 3 had 2 factors). The factors of 2 are 1 and 2 while the factors of 3 are 1 and 3. How many ways can these factors be combined?

I used a tree diagram to help illustrate what was happening. Some students started to have a glimmer of understanding. I next drew a tree diagram for 30. It started off exactly like the one for 6. We then had to include two new branches off of each end. This results in 8 total factors for 30.

I then turned to 120 and wrote down its prime factorization: 2^3 x 3^1 x 5^1. I wrote down that 120 possessed 16 factors. One student wanted to know if I had memorized that value. I replied that I was making use of what we were doing to calculate the number of factors. We talked about how 2^3 would generate 4 factors. I drew a tree diagram to illustrate the point. I asked the class what would happen when we added the factors of 3 onto the diagram. The result would double. When we add the factors of 5 onto the diagram the result doubles again.

You can quickly count the number of factors an integer has by adding one to each exponent in its prime factorization and then multiplying the values together. For 120, this gives (3+1)(1+1)(1+1) = 16 total factors.
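
The tree diagram has a direct computational analogue: each factor comes from choosing one power of each prime, one choice per branch. A small Python sketch of that enumeration for 120 (my own illustration, not a class activity):

```python
from itertools import product

def all_factors(prime_powers):
    """Build every factor by picking one power of each prime, like tree branches."""
    choices = [[p ** e for e in range(exp + 1)] for p, exp in prime_powers]
    factors = []
    for combo in product(*choices):
        value = 1
        for c in combo:
            value *= c
        factors.append(value)
    return sorted(factors)

# 120 = 2^3 x 3^1 x 5^1, so we expect (3+1)(1+1)(1+1) = 16 factors
factors = all_factors([(2, 3), (3, 1), (5, 1)])
print(len(factors))  # 16
print(factors)       # [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120]
```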

I asked the class to determine how many factors 1872 has for homework.


Visit the class summary for a student's perspective and to view the lesson slides.

Monday, March 4, 2013

IPS - Day 28

Today we started looking at data collection issues. The focus of this class was data integrity and preliminary analysis. The goal was for students to consider the quality of their data and to understand what they already know about data analysis.

I used the survey data that was collected on the first day. To start things off, I briefly reviewed the questions asked and then showed them the data set. I had keyed in the data exactly as it was written down and had made one keystroke error in gasoline prices by omitting a decimal point.

Students looked through the data and we corrected all of the data values except one, which we had to delete as invalid. We then moved to a discussion of categorical versus quantitative data. I asked students to classify the nine variables into one of the two columns. Everyone agreed on the classifications except for shoe size.

Shoe size is always an interesting discussion. It boils down to shoe size measuring shoe length even though the measurement units are discrete.

The next topic was summarizing the data. I asked students what they would say about the data if someone walked into the room. I asked students to focus on two columns of data: political leaning and amount paid for gas. Students worked on calculating means and percent distributions.

I then asked which students had made a graph. Not one graph was made. I told them the first rule of statistics is to make a graph. I told them the second rule was to make a graph. I told them the third rule was to make a graph. A student asked what the fourth rule was. I told them the fourth rule was to see rules 1-3.

We talked about appropriate graphs for political leanings. The conclusion was that a bar graph or pie graph was appropriate. One student had made a bar graph during the discussion, and I was able to use it to highlight the characteristics of a bar graph: bars don't touch, the order of categories doesn't matter, labeling, and the like.

We then moved to gas price. Students responded that a scatter plot could be made. I pointed out that you need two variables to make a scatter plot and this class focuses on univariate data. Students mentioned making a bar graph. I pointed out that bar graphs are for categorical data. Students mentioned histograms. They then remembered stem and leaf and box plots.

I briefly discussed the characteristics of a histogram: bars touch, the x-axis is a scale where the order matters, the bin sizes must be the same size, there is labeling, and the like.
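
For reference, here is how those two graph types look in code. This is a matplotlib sketch with made-up survey values, not our actual class data; note the bar graph's separated bars versus the histogram's touching bars on a numeric scale.

```python
import matplotlib.pyplot as plt

leanings = {"Liberal": 8, "Moderate": 11, "Conservative": 6}    # hypothetical counts
gas_prices = [3.45, 3.52, 3.58, 3.61, 3.61, 3.65, 3.70, 3.79]  # hypothetical prices

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph for categorical data: bars don't touch, category order is arbitrary
ax1.bar(list(leanings.keys()), list(leanings.values()), width=0.6)
ax1.set_title("Political leaning")
ax1.set_ylabel("Count")

# Histogram for quantitative data: bars touch, x-axis is a scale, bins equal width
ax2.hist(gas_prices, bins=4, edgecolor="black")
ax2.set_title("Price paid for gas")
ax2.set_xlabel("Dollars per gallon")
ax2.set_ylabel("Count")

plt.tight_layout()
plt.show()
```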

For homework, I asked students to pick one categorical and one quantitative variable. For the two selected variables they are to create appropriate graphs and to summarize their results.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 28

Today we started looking at prime factorization. This is familiar territory for most students but allows connections back to counting ideas and the greatest common divisor while setting the stage for the Euclidean algorithm to come.

The first step is to simply look at factoring some integers with the idea of seeing how many ways an integer can be factored into prime numbers. Students hopefully see that each integer only has one way to be factored into primes. This becomes a conjecture of interest, "There is only one prime factorization for an integer."

I asked students to consider how they could make an argument that would demonstrate the truth of this statement. This led many students to look at factor trees and to go through the idea of testing the conjecture. I pointed out to the class that testing is fine for showing the conjecture works for specific numbers, but there are infinitely many integers; how do we know it will always work?

Students struggle with the idea of proof. What constitutes enough evidence to say something will always be true? The tendency is to check multiple situations and say it appears to work. I believe this is a remnant of how things are verified and checked in previous math classes: justification is through testing, with little attempt to provide generalized evidence that statements will always be true.

I provided a proof of unique prime factorization through contradiction. This proof relies on a lemma that if p is prime, n = ab, and p|n, then p|a or p|b. I didn't provide a proof of this lemma to the class but gave examples so they were clear about its meaning, treating it more as an axiom.

The proof I provided followed along these lines: assume that an integer N has two distinct prime factorizations, designated by p's and q's. Without loss of generality, you can assume there are more q's than p's. The first p divides N and therefore must divide one of the q's; since both are prime, that q must equal the p, and the pair can be cancelled. Continue repeating this process, exhausting all copies of the first p in the factorization, then repeat with the second p, and continue until there are no p's left. This results in the p-side equaling one. Since there were assumed to be more q's than p's, we now have a series of q's multiplied together that equal one. Since the q's were assumed to be prime numbers, this is a contradiction, and therefore our assumption of two different prime factorizations is false.

This is an important result. The knowledge that every integer can be factored into primes in only one way has ramifications throughout number theory and mathematics.

The next question posed is how many factors does any integer have and is there a way to determine this without having to list out all of the factors? Students explored several values: 10, 25, 120, 30, 24, and 33. I asked students to try to make connections between the prime factorization of a number and the number of factors it had. This investigation makes a connection back to the counting that students had been immersed in for the first several weeks of class.

Students made a little headway on this but were reluctant to factor values. What they noticed for the values provided were:

  1. The odd numbers had four factors or less
  2. The even numbers had four factors or more
  3. The square number had an odd number of factors and the rest had an even number of factors
I told students they need to look at the first 500 integers and start to group and classify these integers based upon their prime factorization and how many factors they have. This is a homework assignment; I'll see if they find any additional patterns tomorrow.
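
Generating that data by hand is tedious; here is a short Python sketch (offered as an aid, not part of the assignment) that groups the first 500 integers by their number of factors:

```python
from collections import defaultdict

def count_factors(n):
    """Count n's divisors by direct trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

groups = defaultdict(list)
for n in range(1, 501):
    groups[count_factors(n)].append(n)

for num_factors in sorted(groups):
    members = groups[num_factors]
    print(f"{num_factors:>2} factors: {len(members):>3} integers, e.g. {members[:5]}")
```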


Visit the class summary for a student's perspective and to view the lesson slides.

Friday, March 1, 2013

IPS - Day 27

Today we wrapped up our probability unit. The focus was to have students create probability models, calculate expected values, and simulate situations.

The class started by considering the game of roulette. What is the expected value of repeatedly placing a $1 bet on the number 29? If 29 comes up then you win $35. If any other number comes up you lose $1. The roulette wheel has 38 values on it, 1-36 plus zero and double-zero.

     E(29) = 35 x 1/38 - 1 x 37/38 = -2/38 or approximately -$0.05.

This represents the average amount of loss per play. So, if I were to play the game 100 times, on average I would lose approximately $5.00 in total; over 10,000 plays, the expected loss grows to over $500.

What if you decide to place a bet on black numbers? In this case you win $1 if a black number comes up and you lose $1 otherwise. Students calculated the expected value and questioned whether the result of -$0.05 was correct. That is indeed the expected value.

I asked the class what if you bet on the red numbers instead? They thought briefly and decided that the expected value shouldn't change.

Suppose someone placed a bet on the last third of the numbers. In this case you win $2 if a number in the range 25-36 appears and lose $1 otherwise. Students were amazed to see that the expected value in this situation was again -$0.05.

I asked what would happen with a bet on even or odd numbers, or betting on a column of 12 numbers. They responded that the result would be -$0.05.
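
Every one of these bets can be checked with the same two-outcome expected value calculation. Here is a Python sketch over the standard American-wheel payouts (the particular bet list is my own):

```python
# Each bet: (payout per $1 won, number of winning slots out of 38)
bets = {
    "single number (29)": (35, 1),
    "black":              (1, 18),
    "last third (25-36)": (2, 12),
    "even":               (1, 18),
    "column of 12":       (2, 12),
}

for name, (payout, wins) in bets.items():
    expected = payout * wins / 38 - 1 * (38 - wins) / 38
    print(f"{name}: E = {expected:+.4f} dollars per $1 bet")
# every line prints -0.0526, i.e., -2/38
```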

This is a fun and compelling example of expected value. Students start to grasp that no matter how you break down a bet on a roulette table, the expected value will be the same. As I tell my students, "They don't build those large hotels and casinos on generosity."

Next, I had students explore the games of craps, blackjack, and roulette. The latter was for those who needed a bit more structure to help them absorb what was going on.

Students looked at these games from a simulation perspective and from a theoretical perspective. It provided the class another opportunity to determine probabilities of events and create simulations.
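
As one example of the simulation side, here is a minimal Python sketch of repeated $1 bets on black; with enough spins the empirical average drifts toward the theoretical -2/38 (the slot encoding is my own convention):

```python
import random

def simulate_black_bet(num_spins, seed=1):
    """Average result per spin of betting $1 on black on an American wheel."""
    rng = random.Random(seed)
    total = 0
    for _ in range(num_spins):
        slot = rng.randrange(38)   # treat 0-17 as black, 18-35 as red, 36-37 as green
        total += 1 if slot < 18 else -1
    return total / num_spins

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} spins: average {simulate_black_bet(n):+.4f} per $1 bet")
```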

I concluded class by explaining that we will make use of probability models, expected value, and simulations to understand data that we collect. The concepts learned and skills developed so far will be applied within the context of data that we have collected and are trying to understand.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 27

Today we finished looking at the Euler totient function φ(n). There were about a half-dozen students who were out for the previous class, so I had students at their tables go over φ(n) with the students who had missed it. This was an opportunity to catch people up while reinforcing the learning from yesterday.

The next facet of φ(n) to look at was summing φ(k) for all divisors, k, of n. As an example, if n = 15, the divisors of 15 are 1, 3, 5, and 15. Calculate the following: φ(1) + φ(3) + φ(5) + φ(15) = 1 + 2 + 4 + 8 = 15. It turns out this sum equals exactly 15; in general,

     n = φ(d1) + φ(d2) + ... + φ(dk), where d1, d2, ..., dk are the divisors of n.

I asked students if this would always be true and why. It was interesting to see the approaches different students took. Some students proceeded to calculate the values for more integers and verify that it was working. Some looked at relationships with prime numbers versus composite numbers.

One student asked why we should care if it is always true, suggesting we could worry about it not working when we encountered that situation. I countered with the idea that we want to distinguish between an interesting curiosity and something that we know is always true and can be built upon. If we don't know that a statement is always true, it is just a curiosity. What we are trying to do in this class is understand the mathematics that underlies the relationships we see so we can better understand what is happening and why.

The class discussion that followed proved to be quite productive on one front. One group had focused on the result for prime numbers. As a result, they informally proved that n = φ(n) + φ(1) when n is prime. This comes immediately from the fact that φ(1) = 1 and that the number of integers less than a prime p that are relatively prime to p is p - 1. Therefore, for any prime p, we have φ(p) = p - 1, and so φ(p) + φ(1) = (p - 1) + 1 = p.

This was an exciting result, at least from the teacher's perspective. It provided a proof for a specific case of the identity we were exploring. The result for composite numbers was not as forthcoming. Students were trying to make a connection between the divisors and relative prime numbers but that's about as far as it went.
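
The identity itself is easy to verify numerically, which at least shows students the pattern persists for composites. A brute-force Python sketch straight from the definitions (the test values are arbitrary choices of mine):

```python
import math

def phi(n):
    """Euler totient: count k in 1..n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in (15, 12, 28, 97):
    total = sum(phi(d) for d in divisors(n))
    print(n, "->", total)  # the sum over divisors always reproduces n
```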

We next looked at the distinct prime divisors d1, d2, ..., dm of an integer n and calculated n(1 - 1/d1)(1 - 1/d2)...(1 - 1/dm). For example, for n = 15 we calculate 15(1 - 1/3)(1 - 1/5) = 8. It turns out that n(1 - 1/d1)(1 - 1/d2)...(1 - 1/dm) = φ(n). Several students noticed this connection and others were surprised.
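
The product formula can also be checked against the brute-force definition. Here is a sketch that strips out each distinct prime divisor and applies the (1 - 1/d) factor using exact integer arithmetic:

```python
import math

def phi(n):
    """Euler totient via the definition: count k in 1..n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def phi_product_formula(n):
    """Compute n(1 - 1/d1)...(1 - 1/dm) over n's distinct prime divisors."""
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            result = result * (d - 1) // d   # exact: result is divisible by d here
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:                                # leftover prime divisor
        result = result * (m - 1) // m
    return result

for n in (15, 12, 28, 97, 360):
    assert phi_product_formula(n) == phi(n)
    print(n, "->", phi(n))
```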

We concluded with a brief discussion that this relationship and the previous relationship were connected. I also revisited the idea that φ(2n) = n, which holds when n is itself a power of 2. Class time did not allow further pursuit of these threads beyond noting that the results are all connected.

Students wrote a brief summary in their notes about the connections they saw between prime numbers and relatively prime numbers.


Visit the class summary for a student's perspective and to view the lesson slides.