Tuesday, April 30, 2013

IPS - Day 54

Today students worked on putting together posters to present their chocolate experiment results. At this point, they appear to be doing a good job of setting the purpose, stating their hypotheses, and explaining their procedure. The analysis still tends to be somewhat superficial from a statistical perspective, and the conclusions do not draw enough from the analysis.

Putting together the posters took the entire period. I have two posters from last semester that I will have students examine at the beginning of class tomorrow. I will then have students score their own posters, and I will modify their score based upon how accurately they have evaluated themselves. I basically take the difference between their score and my score. If they scored themselves above my score, I will subtract that difference from my score. If they scored below my score, I will adjust their score to the average of their score and my score. This provides incentive for students to score their posters accurately but stay on the conservative side.
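
For concreteness, the adjustment rule is easy to express in code. This is just my own sketch of the arithmetic described above, not something used in class:

    def adjusted_score(student_score, teacher_score):
        """Adjust a self-assessed poster score against the teacher's score.

        Over-scoring is penalized by the size of the gap; under-scoring
        is softened by averaging the two scores.
        """
        if student_score > teacher_score:
            # Subtract the overage from my score.
            return teacher_score - (student_score - teacher_score)
        # Student scored at or below my score: average the two.
        return (student_score + teacher_score) / 2

    print(adjusted_score(90, 80))  # over-scored: 80 - 10 = 70
    print(adjusted_score(70, 80))  # under-scored: (70 + 80) / 2 = 75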

Besides scoring the posters, we will cover the idea of a bootstrap sample and begin the next investigation project.

Visit the class summary for a student's perspective and to view the lesson slide.

Discrete Math - Day 54

Today we continued our exploration of graphs. The class started with looking at the problems that were not completed during last class. Students were in general agreement on the results for problems 3, 4, and 5.

I asked if it made a difference which vertex you used as a starting point in problem 5. Students thought it probably did not matter. We explored a few variations and demonstrated that the path chosen needed to be considered carefully in order to minimize traversing the same edge more than once.

With number six, students produced a variety of answers. I was looking to see who produced a minimal path. One group found a path of 36 edges, two more found a 35 edge path, and one found a 33 edge path. I asked a student to present a 33 edge path. As the student drew the solution, the realization came that the path could be completed by traversing 32 edges. And, the path started and ended at the same vertex! The students were somewhat amazed at this result.

The next piece was a bit of historical perspective, showing the origins of graph theory with Euler's bridges of Konigsberg problem. Several students attempted to complete a circuit without crossing any bridge twice. A couple of students thought it would not be possible because of the odd number of bridges present.

Presenting a problem like this provides students an opportunity to see how a mathematical idea begins and how it evolves. We go from the concrete map of Konigsberg to a more abstract representation with vertices and edges. The representations are equivalent but the abstract representation displays connections and relations in a clearer manner.

Next, we went through a series of definitions. Mathematical definitions are critical in that they establish a foundation from which everyone agrees and builds upon. Definitions were provided for the following terms:

  • Graph
  • Vertex
  • Edge
  • Loop
  • Multiple edges
  • Adjacent vertices
  • Path
  • Circuit
  • Euler path
  • Euler circuit
  • Degree of vertex
  • Odd vertex
  • Even vertex
  • Connected graph
  • Component
  • Cut edge
I was able to use the work that students previously produced to illustrate many of these definitions. For the others, I drew simple graphs that displayed the concept.
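
Several of these definitions can also be tied together computationally. Here is a small Python sketch, my own illustration rather than anything we did in class, that counts vertex degrees from an edge list and applies Euler's criterion to the Konigsberg bridges: a connected graph has an Euler circuit when every vertex is even, and an Euler path when exactly two vertices are odd.

    from collections import Counter

    def odd_vertices(edges):
        """Count edge ends at each vertex; a loop (u, u) adds 2 to u's degree."""
        degree = Counter()
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
        return [v for v, d in degree.items() if d % 2 == 1]

    # The four land masses and seven bridges of Konigsberg.
    konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                  ("A", "D"), ("B", "D"), ("C", "D")]
    print(odd_vertices(konigsberg))
    # All four vertices are odd, so no Euler path or circuit exists.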

At this point I gave students two problems to complete. The first asked students to draw a graph that had 6 vertices, 4 loops, and 2 multiple edges. The second asked students to draw a graph that had 5 vertices, each of degree 4, and contained no loops or multiple edges.

Students worked on the first problem and were discussing and comparing results. As students completed their graphs they were asking if what they had met the stated requirements. As I told them they did, I could see the sense of triumph in their faces and their reaction to working through the problem. Several students who typically are rather quiet and do not express one way or another how they are doing visibly showed their feelings about successfully completing the task. It was a very cool moment.

Time was running out, so I asked students to tackle the second problem as homework. We will share out solutions to the first problem next class, as there were a variety of solutions. We will also discuss the second problem. Afterward, I will have students work through an additional five problems for next class.
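
For anyone who wants to check a candidate graph, here is a rough sketch of the degree bookkeeping involved. It uses the complete graph on 5 vertices, which is one possible answer to the second problem, so treat it as a spoiler:

    from collections import Counter
    from itertools import combinations

    # One candidate for the second problem: every pair of 5 vertices joined
    # by a single edge (no loops, no multiple edges).
    vertices = [1, 2, 3, 4, 5]
    edges = list(combinations(vertices, 2))

    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    print(all(degree[v] == 4 for v in vertices))  # True: each vertex has degree 4
    print(len(edges))                             # 10 edges in this graph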


Visit the class summary for a student's perspective and to view the lesson slides.

Monday, April 29, 2013

IPS - Day 53

Today we continued work in the computer lab. The focus today was for students to analyze their data and to create appropriate graphs, summary statistics, and distributional comparisons for the chocolate melting experiment.

To start things off, I asked students what their poster should contain. This was a nice recap of how a statistical analysis is structured and reported. Students pointed out the need to state a purpose; to show the null and alternative hypotheses; to include graphs, tables, and analysis; and to provide a conclusion.

I tried to remind students that the conclusion needs to be based upon the analysis of data. They should also address the probability of seeing results such as theirs.

With that, I let students use the CPMP software to generate graphs, redistributions, and summary statistics. I informed the class that they would be assembling and presenting their posters tomorrow. With that, I let the class work and walked around providing assistance to different groups.

There are still quite a few students who just want to compare means and call the analysis good. I feel that I haven't adequately built the connection between the probability portion of the class and the analysis and conclusion that come from comparing random events and a specific outcome. I am thinking about how to tweak things throughout the semester so that this connection becomes more evident.

I'll know more how much students really understand the nature of statistical analysis when I see their posters tomorrow.

Visit the class summary for a student's perspective and to view the lesson slide.

Discrete Math - Day 53

Today we began exploring graphs. I make liberal use of David Clark's Graphs and Optimization course material which can be found on the Journal of Inquiry-Based Learning in Mathematics web site (jiblm.org).

To start things off, I present students with the mail carrier problem. This problem depicts a network of streets. The scenario is that students are offered a job delivering mail. The supervisor estimates that the work will take a total of 8 hours, including lunch, breaks, and delivery time. The question is whether or not they should take the job.

Students worked through different paths, trying to minimize the time spent traversing the street network. The network contains 41 segments, and the example provided showed a path requiring a total of 53 segments to traverse every block. As students worked they discussed different options and approaches. Two different students came up with paths that took 49 segments. These were shared with the class. The implication was that it would take approximately 9.5 hours to complete the work. Since the supervisor was only willing to pay for 8 hours of work, this was not a good job to accept.

Questions arose as to whether there was a single best path and if unique paths existed. These are sound mathematical questions. I was pleased to see students thinking mathematically about the work.

We then moved on to simpler graphs. Students counted the number of segments and then tried to determine a minimal path through the graph. I used the first example, a simple square, to illustrate what they were to do. Students dove into the assignment and started to discuss the need to double back on certain segments.

We discussed the first problem they worked on. The idea of unique paths was resolved as it was easy to demonstrate there were multiple paths that all had the same length. I asked students to complete the work on the next few graphs. We will work on these and look for characteristics that help to identify when there is a need to double back on a segment.

Visit the class summary for a student's perspective and to view the lesson slides.

Friday, April 26, 2013

IPS - Day 52

Today the focus was on learning to use software to aid in the analysis of data. The focus was on analyzing the chocolate experiment data. The class worked in the computer lab today. I had everyone launch the CPMP software that I saw at the NCTM Annual Conference in Denver, load chocolate data into the spreadsheet, and then look at how to create sampling redistributions to see what random samples might look like if the chocolates melted at the same rate.

Most students seemed to grasp how to use the software. I had everyone save their files and we'll continue to work with the software and analyze results next class. The intent is for the groups to analyze their data while I assist individuals with specifics on how to create statistics, graphs, or general questions about the analysis.

Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 52

Today I wrapped up our foray into number theory and cryptography. There is just a little over two weeks left before our finals, and I want to get into some graph theory before the end of the semester.

Class started with looking at the power residues of 10 mod different values. I asked that students select different values of m to use in their groups so that they could see what happens. I asked students to consider using 3, 4, or 5 for values of m.

Although some students were still confused about how to calculate power residues, they were quickly put on track. The results for these values were put on the board:

     Powers of 10:   1   10   100   1000   10000 ...
     mod 3:          1    1     1      1       1 ...
     mod 4:          1    2     0      0       0 ...
     mod 5:          1    0     0      0       0 ...

Students saw, as with yesterday, that the power residues eventually begin to repeat, or revert to zero in the case where 10 and m have a common factor. I mentioned that these results could be used to derive the division rules that we used earlier in the semester. [This is actually an interesting exploration if there is time to do it.]
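
These rows are easy to reproduce in a few lines of Python; this is my own sketch, not something we used in class:

    def power_residues(a, m, count=8):
        """Return the residues of a^0, a^1, ..., a^(count-1) modulo m."""
        return [pow(a, k, m) for k in range(count)]

    for m in (3, 4, 5):
        print(m, power_residues(10, m))
    # 3 [1, 1, 1, 1, 1, 1, 1, 1]
    # 4 [1, 2, 0, 0, 0, 0, 0, 0]
    # 5 [1, 0, 0, 0, 0, 0, 0, 0]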

From this, I asked the class what a^0 mod m would be for any integers a and m. Students recognized that a^0 equals 1 and that 1 mod m has a value of 1.

I pointed out that once the residues start to repeat a pattern, you become locked into that pattern, and once you reach a residue value of zero, the value must always remain zero.

Knowing the behavior of power residues is a tool that Eve can use when trying to unlock the private key that Alice and Bob exchange.

A second tool that Eve uses is Fermat's Little Theorem. Fermat's Little Theorem states

     For any positive integer m and prime number p, with m not divisible by p,
     m^(p-1) ≡ 1 (mod p).

An alternative form of the theorem states the congruence as m^p ≡ m (mod p).

I asked students to pick some values to confirm this worked. This process enables students to make a tangible connection to what appears to be a rather abstract statement.
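
That confirmation can also be automated. A minimal sketch of the same spot checks students did by hand:

    # Spot-check Fermat's Little Theorem: m^(p-1) ≡ 1 (mod p)
    # for a prime p and m not divisible by p.
    for p in (5, 7, 11, 13):
        for m in (2, 3, 4, 6):
            if m % p != 0:
                assert pow(m, p - 1, p) == 1
    print("All checks passed")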

I brought in Euler's theorem, which relates the totient function to a very similar relationship:

     For any integers m and n that are relatively prime to each other
     m^φ(n) ≡ 1 (mod n).

Due to time, I elected not to have students try some values to verify the result. The important aspect of this relationship is that φ(n) < n, which guarantees that the congruence is satisfied for at least one exponent smaller than n.
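
Although we skipped the verification in class, here is a quick sketch, with a brute-force totient of my own devising, that checks Euler's theorem for a few pairs:

    from math import gcd

    def phi(n):
        """Euler's totient: count of 1..n relatively prime to n (brute force)."""
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    for m, n in ((3, 10), (7, 20), (5, 12)):
        assert gcd(m, n) == 1
        print(m, n, phi(n), pow(m, phi(n), n))  # last value is always 1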

Fermat's Little Theorem is not bi-directional in nature. If p is prime, we know the congruence is satisfied. In reverse, knowing the congruence is satisfied does not guarantee that p is prime. As an example,

     2^340 ≡ 1 (mod 341), but 341 = 11 x 31

Values that satisfy Fermat's Little Theorem but are not prime are known as pseudoprimes. Primality tests use these results with the idea that the more values tested that work, the greater the probability that the value is in fact prime.
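
Here is a sketch of a Fermat-style primality test, my own illustration, including the base-2 pseudoprime 341 mentioned above:

    import random

    def fermat_test(n, trials=20):
        """Probabilistic primality test; it can be fooled by pseudoprimes."""
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False          # definitely composite
        return True                   # probably prime

    print(pow(2, 340, 341))           # 1, yet 341 = 11 x 31 is composite
    print(fermat_test(341))           # usually False: other bases expose it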

Armed with these tools, Eve is guaranteed to find power residues that start repeating. At this point Eve can use the point of repetition to crack the code and identify the key being used by Alice and Bob.
To make the key exchange safer, large values are used. When thinking of large values, it is necessary to consider values that are hundreds of digits long.

At this point I asked students for questions and asked them to summarize their learning about cryptography, congruence relations, and modular arithmetic.

There was still some time left in class, so I decided to take a moment to introduce the idea of public key encryption and RSA public keys. Again, this is a topic that can be expanded and investigated over several days, time permitting.

I started off by explaining that we want to conduct a remote transaction in a secure way so that both parties know they are not being cheated. To illustrate the point I picked a student to play a coin flipping game with me. I stood at the opposite side of the room and asked the student to call heads or tails as I flipped the coin. I flipped several times and, regardless of the outcome, told the student they guessed wrong. Needless to say, the accusation of cheating arose rather quickly.

I asked the class to consider how we could play this game to ensure that it is played fairly. We need a method that allows for an oblivious transfer: one in which neither party knows what the other will do, but where one party's action determines the ultimate success or failure of the other party.

This is illustrated by a lock box with two keys. Bob has a choice of 2 keys and selects one of the keys. Alice has the same two keys. She picks a key to send to Bob. If the sent key matches the one that Bob already has then Bob loses since he is unable to unlock the box. If Alice sends the non-matching key, Bob now possesses both keys, can open the box, and wins.

This is done digitally using prime numbers. Alice selects two prime numbers and sends the product to Bob. Bob attempts to factor the product. He makes a guess and constructs a congruence relation based upon his guess and the product he received. Bob returns the value from his congruence relation to Alice. Alice then finds the two possible values that satisfy the corresponding congruence relation that she forms. One of the values is Bob's guess and the other is a second value. Alice does not know which is which but must return one of the values to Bob. If Alice returns Bob's guess then Bob loses since he has no additional information to resolve the factoring. If Alice returns the other value, Bob can use congruence relations to find Alice's two factors and he wins.

I walked students through this process with small values. The process is predicated on the idea that extremely large numbers that are the product of two large prime numbers are inherently hard to factor. This is believed to be true by mathematicians but has not been proven. One of the issues here is that if a government or some group were to demonstrate that factoring is actually not as hard as is believed, it would open up that government or group's ability to hack into many diverse systems. Of course, if this were discovered it wouldn't be disclosed, since that government or group would have a distinct advantage over everyone else.
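
Here is my own reconstruction of that small-value walkthrough in code. The primes (11 and 19), Bob's guess, and the brute-force square-root search are all toy choices that only work because the numbers are tiny:

    from math import gcd

    p, q = 11, 19                 # Alice's secret primes
    n = p * q                     # Alice sends n = 209 to Bob
    x = 7                         # Bob's secret guess
    c = (x * x) % n               # Bob sends back x^2 mod n

    # Alice finds all four square roots of c mod n (brute force at toy sizes).
    roots = [r for r in range(n) if (r * r) % n == c]
    print(roots)                  # [7, 26, 183, 202]: the pairs 7, -7 and 26, -26

    # Alice returns one root y. If y is Bob's value (up to sign), Bob loses.
    y = 26                        # suppose Alice happens to return this root
    if y in (x, n - x):
        print("Bob loses: no new information")
    else:
        factor = gcd(x + y, n)    # x^2 ≡ y^2 (mod n) yields a factor of n
        print("Bob factors n:", factor, n // factor)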

With that, the exploration of number theory and cryptography comes to a close. For the final time available in the semester, the class will explore some ideas in graph theory.

Visit the class summary for a student's perspective and to view the lesson slides.

Thursday, April 25, 2013

Discrete Math - Day 51

Today we jumped back into the world of secure data transmissions by looking at the Diffie-Hellman exchange process.

I started class by checking on the status of the cryptology research paper. Students felt good about where they were on this project. I told them that the report is due on Tuesday and that presentations would take place on Thursday. This allows students to make use of the two Resource periods available to them on Wednesday and Thursday morning to finalize their presentations as a group.

We then looked at the Diffie-Hellman exchange process. I use an activity that I read about in NCTM's Mathematics Teacher. In this activity, two students act as Alice and Bob. They are each given a bag and asked to place colored disks in their bags, not allowing anyone else, including each other, to know what they placed in the bag. The rest of the class acts as hackers waiting on the internet.

The bags represent packets of information. Anyone can make a copy of a packet; they just don't know what the actual contents of the packet might be. Anyone may also add pieces into the bag.

Alice and Bob are located on opposite sides of the classroom. The two bags are passed around the classroom, ultimately with Bob receiving Alice's bag and Alice receiving Bob's bag. Along the way, any group or individual can acquire a copy of either bag; they just are not allowed to look inside the bag.

The challenge is for Alice and Bob to add pieces to their bags so that they will end up with the exact same content. Students with bag copies were allowed to add disks to their bags. The question to the class was how could Alice and Bob end up with the exact same content?

After some thought, one student suggested that Alice could replicate the disks she originally put into her bag and place those disks into Bob's bag; Bob could do the same thing, adding his original quantities into Alice's bag. In this way, both Alice and Bob would have the same content in their two bags. As for the copies, unless a student was able to match Bob's content exactly and add it to Alice's copy, or to match Alice's content exactly and add it to Bob's copy, they would not have broken the key. When Alice's and Bob's original contents were revealed, no one in class had matched the end content.

This established the basic process; it was now time to look at the math behind it. This allowed me to review our journey to this point. We started by looking at properties of prime numbers. We then looked at data encoding and encryption. This led to the need to understand modulo arithmetic. We studied congruence relations and modulo arithmetic, which now come back into play for secure transactions.

I stepped students through a simple example of the math involved in a Diffie-Hellman exchange. The steps are outlined in the image below.


The example used values of g = 5, p = 11, and secret key values of 2, and 3. Students were a bit confused by it all at first but were asking good questions to try to break down the process and why it worked.
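
The same example takes only a few lines of code; this sketch of mine just restates the numbers above:

    g, p = 5, 11               # public base and prime modulus
    a, b = 2, 3                # Alice's and Bob's private keys

    A = pow(g, a, p)           # Alice's intermediate value: 5^2 mod 11 = 3
    B = pow(g, b, p)           # Bob's intermediate value:   5^3 mod 11 = 4

    key_alice = pow(B, a, p)   # 4^2 mod 11 = 5
    key_bob = pow(A, b, p)     # 3^3 mod 11 = 5
    print(key_alice == key_bob, key_alice)   # True 5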

I then asked students to break into groups of three and try it for themselves, with one person taking on the role of Alice, one the role of Bob, and the third the role of Eve. Again, some groups were a bit confused in working through the process but after asking questions and receiving some assistance they were able to successfully work through the math.

While this was going on, Eve was to use the public values that were known to try and break the code and uncover the final key. What Eve had to work with were the values for g and p and the two intermediate values exchanged by Alice and Bob. One student was able to crack the code and another was working through the values that would crack the code.

I discussed with the class that knowing the intermediary values and p enables Eve to look at equivalent congruence values for powers of the intermediate values. By doing this, Eve can determine the resulting key value.
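
In code, Eve's attack amounts to a brute-force discrete logarithm. A small sketch, continuing the g = 5, p = 11 example:

    # Eve knows g, p, and the exchanged intermediate values A and B.
    g, p, A, B = 5, 11, 3, 4

    # Try exponents until g^e matches an intermediate value.
    a_cracked = next(e for e in range(1, p) if pow(g, e, p) == A)
    print(a_cracked)                 # 2: Alice's private key
    print(pow(B, a_cracked, p))      # 5: the shared key, recovered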

At this point, students were asking if they could try it again. I suggested that they use double-digit values to make things a bit harder for Eve. Students worked through the process again. As they did so, some did not come out with the same key value. When looking at their results, I could see that they either didn't calculate their intermediate values correctly or did not raise the other person's intermediate value to the power of their private key.

Although students were not proficient in the process, they clearly saw how the process was working. I could relate their work back to the bag exchange. The private keys were the disks placed into the bag originally. The intermediate value was the bag being transported. The final key was adding back in disks to make the bag content look the same. This connection helped students feel more comfortable with why the steps were taking place.

I told students that, using computers, small values for g and p can quickly be cracked. To ensure safety, larger values are needed. Even with larger values, hackers have tools and math that aid them. Specifically, there are tools that Eve has available to assist in cracking the code.

The first has to do with what are called power residues. If we pick a value a and then proceed to raise a to various integral powers, starting with 0 and continuing up, we produce a sequence of values. We then divide these values by an integer m. The remainders left over from this division are the residues. Since the residues come from powers of a value, they are called power residues.

I asked students to look at the power residues of 5 mod 3. The sequence of values being examined is 1, 5, 25, 125, ... These values are then divided by 3 and the remainder is captured. The result is the sequence 1, 2, 1, 2, ... Students immediately saw the repetitive nature of these residues. The question that came up was whether this would happen for all values.

I love when this happens! Students are starting to work with the math and making conjectures about what it means: will the pattern recur, and why does this happen? This is thinking like a mathematician: examining relationships and patterns, trying to make sense of what you see, and extending those results.

Since the question was posed, we took a look at the power residues of 10 mod 2. In this case our sequence of values is 1, 10, 100, 1000,... and the remainders are 1, 0, 0, 0,... So we end up with a repeating pattern again.

We'll take a closer look at what the implications of this are for Eve as she tries to break the code and recover the key that Alice and Bob have exchanged.

Visit the class summary for a student's perspective and to view the lesson slides.

Wednesday, April 24, 2013

IPS - Day 51

After reviewing the work that students produced on their chocolate experiment results and reading through more of the car analysis reports that were turned in, I concluded that students were not getting what statistical analysis is all about. I determined that I needed to refocus attention on why we proceed in a specific manner with statistical analysis. I also had to break them out of thinking about data analysis as it is taught in a math class rather than in a statistics class.

To begin, I looked at the idea of hypotheses and questions of interest. Students hadn't attempted to create hypotheses for the article from last class. I asked students to consider what questions could be posed using the article as a basis.

This led some students to think about the situation and provide questions that could be posed. To encourage others to participate, I asked to hear from someone who had not spoken up yet today. This resulted in a broader group of students who contributed to the discussion.

By and large, the questions students formulated were appropriate, but they all assumed there was a difference in the data. For example, they posed questions about why there was a difference, what caused the difference, or what the trend was over time. All of these are good questions, assuming that a difference actually existed.

I pointed this out. Before we could investigate the questions developed we needed to address the issue of whether or not a difference existed.

The null hypothesis for this becomes

     H0: There is no difference between the percent of private schools and the percent of championships won by private schools.

The alternate hypothesis becomes

     Ha: The percent of championships won by private schools is greater than the percent of private schools.

We discussed that the null hypothesis is a statement that we ultimately hope to prove wrong. I then presented the class with two problems. The problems provided scenarios for which the class needed to develop null and alternative hypotheses.

There were definite struggles as students tried to identify what they wanted to examine and what would be the proposition that they were trying to disprove. I had many conversations around the question of whether the statement used for the null hypothesis should reflect what they wanted to prove (a common tendency) or if that should be the alternative. As groups were winding down their efforts, we shared out our thinking as a class.

The first example was actually a two-tail alternative. For the null hypothesis, most groups went with the percent being equal to a specific value. A couple of people wanted to say that it should be greater than or equal to a value. I left these as two options for the null hypothesis and moved on to the alternative hypothesis.

For the alternative hypothesis, some students wanted to use less than, some wanted to use more than, and one wanted to use around. This was a good discussion. I asked students if the problem statement indicated if we were interested in differences in one direction or the other. They concluded that, no, it didn't matter. This meant we could look at differences either greater than or less than the specific value. This is precisely what a two-tail analysis involves, so the appropriate alternative hypothesis is not equal to the value.

For the second problem, students proceeded more confidently. Almost everyone came up with a null hypothesis of equal to a value, while a few looked at greater than or equal to the value. For the alternative, everyone agreed that it should reflect values less than the null hypothesis value. The problem was a lower, one-tail analysis, so this was correct. I pointed out that, traditionally, the null hypothesis uses only the equals form. For a one-tail test such as this, the null hypothesis becomes equivalent to "equals or more".

I then put up the slide that discussed the big idea of statistical inference. I told them that we needed to view data as statisticians and not look at it as taught in math class. In math classes, students are taught to gather their data, calculate means, compare values and be done. I told the class that looking at just the mean is misleading. I pointed out that if my head was placed in a freezer and my butt was placed in a heated oven, then, on average, my temperature is fine. The point is we need to look at the distribution of values that we can expect to see and what random events should look like under our assumptions.

I used the car analysis as a basis and walked students through what we did and why.

  1. We created a hypothesis about the percentage of cars present in the parking lot; this was our null hypothesis.
  2. The default alternative hypothesis would be that the percentages were not the same as we hypothesized.
  3. We ran simulations using our hypothesized values. [I asked students why we did this but they couldn't really articulate the reason.] These values establish what random samples should look like under our null hypothesis.
  4. A histogram of the simulation results shows the distribution of what random samples we should expect. The histogram establishes the hypothesized model and can be used to calculate the probability that specific values, or those more extreme, would appear.
  5. We then create a sampling plan that will generate a random sample. We calculate a sample statistic from our random sample. The question now is how well our random sample fits with the model.
  6. The histogram we created from the simulations is used to determine where the sample statistic fits. We can use this to calculate the probability of seeing such an extreme value by looking within our simulation data to see the number of simulation results that equal the sample statistic or are more extreme.
  7. We then draw a conclusion about the null hypothesis: either the sample statistic is consistent with the model, and therefore we have no evidence of anything wrong with the null hypothesis, or the probability of seeing our sample statistic is so small that we can only conclude that our null hypothesis is not correct.
After going through this and checking for understanding, I told the class that the car analysis report and their chocolate experiment analysis should be following this statistical analysis process and reporting.
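
As a concrete version of steps 3 through 7, here is a rough simulation sketch. The hypothesized proportion, sample size, and observed count are made-up illustrations, not the class's actual values:

    import random

    # Illustrative values only; these are not the class's actual numbers.
    p_hyp = 0.40     # null hypothesis: hypothesized proportion for one category
    n = 50           # sample size
    observed = 28    # count of that category seen in our one random sample

    # Steps 3-4: simulate many random samples under the null hypothesis
    # to build the distribution of what random samples should look like.
    sims = [sum(random.random() < p_hyp for _ in range(n)) for _ in range(10000)]

    # Steps 6-7: the probability of a count at least as extreme as observed.
    p_value = sum(s >= observed for s in sims) / len(sims)
    print(p_value)   # a very small value is evidence against the null hypothesis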

We then turned to the chocolate analysis. In this situation, we cannot run simulations since we are dealing with measurement data. There are no random digits that can represent different melting times.

I asked the class what the null hypothesis would be for the chocolate experiment. The response was that the null hypothesis would be that all the chocolates melt at the same rate. The alternative would be that the chocolates melt at different rates.

I asked the class to think about what we could do to get an idea of what random samples should look like. After some discussion in their groups, one student thought we should make use of the null hypothesis. There was some discussion about how this might work. Some groups started to drift off this idea. I told the class I like the idea; we just needed to think about how to make use of the null hypothesis.

The null hypothesis states that all the chocolates melt at the same rate. This means we can treat every chocolate the same. We can put all the times together, mix them up, and then randomly split them into appropriate sized groups. I illustrated this process by writing the first four melting times in our data set for each chocolate type. This gave me 12 values, which I numbered from 1 through 12. I then had students generate random numbers. The first four unique values identified the items for my first chocolate type, the second four unique random values identified my second chocolate group, and the remaining four items made up my third chocolate type.

If we repeat this process over and over, ideally thousands of times, we start to see what melting times for random groups look like. Of course, this is not practical to do without software. Fortunately, at the NCTM annual conference in Denver, I sat in on a presentation demonstrating free software that can perform this re-sampling technique. I briefly demonstrated how the software works using the values I had on the board.
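
A bare-bones version of that random redistribution looks like the sketch below. The melting times are made up for illustration, and I use only two chocolate types to keep it short:

    import random

    # Made-up melting times (seconds); two chocolate types for brevity.
    milk = [55, 62, 48, 70]
    dark = [75, 81, 66, 90]
    observed_diff = sum(dark) / len(dark) - sum(milk) / len(milk)

    # Under the null hypothesis every chip melts the same, so pool the times,
    # mix them up, and randomly redistribute them into two groups.
    combined = milk + dark
    diffs = []
    for _ in range(10000):
        random.shuffle(combined)
        group1, group2 = combined[:4], combined[4:]
        diffs.append(sum(group2) / 4 - sum(group1) / 4)

    # How often does random redistribution produce a difference this extreme?
    p_value = sum(d >= observed_diff for d in diffs) / len(diffs)
    print(observed_diff, p_value)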

I asked students to access the software and play around with it as we will be using it to analyze data from explorations we will make in the last few weeks of class. I posted links to the java program launcher and to the java executable jar file on my web site with instructions on installation.

I wrapped up class by formally introducing the idea of re-sampling and, more specifically, random redistribution. My plan is to move into the computer lab next class and have students use the software to assist in analyzing their chocolate experiment data.

Visit the class summary for a student's perspective and to view the lesson slides.

Monday, April 22, 2013

IPS - Day 50

Today the class started to look at the idea of hypotheses.

First, I collected what work the students had produced for the chocolate experiment. I wanted to see where they were in their analysis and if they were attempting to include the requisite pieces. Since we'll be expanding on the work they conducted so far, I am treating this as a formative assessment.

Next, I showed students a statement about the "Big Idea of Inference":


Is what we observe happening different from what can happen by random chance, under the conditions of what we believe to be true (our assumptions)?

We discussed what this meant. Students asked some clarifying questions. There was a good discussion between two students about what was meant by conditions we believe to be true. Some students related this statement back to their experiences in science class.

With that, we took a look at the 39-game hitting streak investigation that we worked on previously. I asked students what results they had found for the mean hitting streak length. After writing some values on the board, I was able to point out that every time a new sample is generated, the mean for that sample may vary from previous sample means.

Using a scale of 1-100, with 1 being no confidence and 100 being absolutely certain, I asked students how confident they were of seeing a hitting streak of roughly 8 games. The class responded with values that tended to be between 50 and 80.

When asked what length of hitting streak they would expect to see, students responded with values concentrated in the 3-15 range. They rated the likelihood of seeing a hitting streak of this length as quite high. Overall, the class responses were consistent with the data they observed.
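
For reference, a simulation along the lines of the hitting streak investigation might look like the sketch below; the per-game hit probability and season length are my assumptions for illustration, not the class's actual settings:

    import random

    def longest_streak(p_hit=0.75, games=162):
        """Longest run of consecutive games with a hit in one simulated season."""
        best = current = 0
        for _ in range(games):
            if random.random() < p_hit:   # assumed per-game hit probability
                current += 1
                best = max(best, current)
            else:
                current = 0
        return best

    seasons = [longest_streak() for _ in range(1000)]
    print(sum(seasons) / len(seasons))    # mean longest streak across seasons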

I presented other questions of interest that could be posed for this scenario. This was to illustrate that one scenario could provide many different looks at the data.

Next, we covered the basics of inference. I pointed out that they were accomplished in some aspects, but there were other aspects that we needed to develop. In particular, the class needs to develop an understanding of hypotheses and how to use their analysis to draw a conclusion about their hypotheses.

The next step was for students to attempt to develop a null hypothesis. This required them to think about the assumed values, what they wanted to prove, and what they could disprove. This is a hard concept initially, because the tendency is to create a null hypothesis about what you want to show and the alternative to be what you don't want to happen.

I explained to the class that you either lack data to reject the null hypothesis, or you have data that is inconsistent with the null hypothesis, in which case you reject it. It is akin to someone charged with a crime. Either the evidence shows that the person is guilty, or the evidence is inconclusive and the person is found not guilty. The verdict of not guilty does not proclaim that someone is innocent, only that there was not enough evidence to demonstrate the person's guilt.

For a first attempt, students actually did quite well with writing their null hypothesis. Some students wanted to keep their hypothesis in the form of a question and others had the null and alternative hypotheses reversed.

We discussed several null hypotheses that were written and then proceeded to look at the alternative hypotheses that were created. At this point, more students were comfortable with creating the opposite statement to their null hypothesis.

I gave the class a second question of interest and this time the creation of null and alternative hypotheses went faster and were much more consistent across the class.

Time was running out. I projected an excerpt from a local newspaper article and asked the class to write a question of interest and to create null and alternative hypotheses for their question. We'll look at what they produced next class.


Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 50

Today we wrapped up working with modular arithmetic. First, though, I checked on the class's progress on the cryptology research paper. After clarifying a few points, such as the length of the paper and the due date, we moved back into modular arithmetic.

I briefly reviewed the mod 3 addition and multiplication tables. I then asked students to construct addition and multiplication tables for mod 7 arithmetic. This took a little longer than I expected, but students were asking good questions and comparing their work. As they got going I could see that the tables were really starting to make sense to them.

I projected a couple of student work examples on the board and students asked questions about discrepancies or points of confusion. I pointed out the cyclical nature of the addition tables, and that in the multiplication tables, each value from 0-6 appeared only once in every non-zero column or row. I also reviewed that the arithmetic we were seeing possessed the same properties as working with real numbers: the associative property, the commutative property, and the distributive property all worked.
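
The tables themselves are easy to generate for any modulus. Here is a small sketch of mine, the sort of tool that could be used to check student work:

    def print_tables(m):
        """Print addition and multiplication tables for arithmetic mod m."""
        header = "  | " + " ".join(str(c) for c in range(m))
        for symbol, op in (("+", lambda a, b: a + b), ("x", lambda a, b: a * b)):
            print(symbol + header[1:])
            for a in range(m):
                row = " ".join(str(op(a, b) % m) for b in range(m))
                print(f"{a} | {row}")
            print()

    print_tables(7)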

We had been looking at arithmetic modulo a prime number for the reason that this arithmetic forms an algebraic field. All the properties for operations with real numbers also are present with these finite algebraic fields. The next task was to look at what happens when the modulus is a composite number rather than a prime number.

I asked students to construct addition and multiplication tables for a composite number. The slide asked for mod 6 tables but I told the students they could use any composite number. Some created tables for mod 4 and others chose mod 6. At this point students were much more comfortable constructing the tables and they completed the task in a more timely manner.

We looked at sample tables for mod 4 and mod 6. It was readily apparent that the addition tables behaved as before, showing a cyclical nature. I could also point out that one set of diagonals was also cyclical. Everything looked good from the addition perspective.

The multiplication tables posed a different issue. Students now saw repeated patterns in some of the rows and columns. I asked students to consider why this would happen. One student thought it might have to do with the factors of the number. When pressed further, the suggestion was that values like 2, 3, and 4 in the mod 6 table have a common factor with 6. This is precisely what is happening.

This allowed me to revisit the cipher function C(x) = 2x + 5 (mod 26). I asked students if they remembered using this cipher function and finding that different letters were mapped to the same value. Many nodded in agreement. The reason this happened is that the cipher function's slope of 2 and the modulus 26 have a common factor. So, if we have a cipher function C(x) = Ax + B (mod m), we will not get unique mappings unless gcd(A, m) = 1.
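
The collision is easy to reproduce. A quick sketch showing that letters 13 apart map to the same value under C(x) = 2x + 5 (mod 26):

    def cipher(x, A=2, B=5, m=26):
        return (A * x + B) % m

    # With gcd(2, 26) = 2, letters 13 apart map to the same value.
    for x in range(13):
        print(x, x + 13, cipher(x), cipher(x + 13))  # last two columns match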

I then revisited Congruence Property 1:

    If a ≡ b (mod m) then ak ≡ bk (mod m) for any integer k.

This is a property that we proved in class. I used this as the basis to argue that if 3 x 2 ≡ 2 then 3 ≡ 1. As I explained in an earlier post, I cheated on this piece, as I wanted students to focus on the concept and not get bogged down in details. Today I was able to formally discuss the converse of this congruence property, specifically,


    If ka ≡ kb (mod m) then a ≡ b (mod m) for any integer k, provided gcd(k, m) = 1.


The requirement for gcd(k, m) = 1 comes from the need that m|k(a-b). If k and m are not relatively prime, we cannot guarantee that m|(a-b).
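
A quick numeric illustration (my own) of why the gcd condition matters, with k = 2 and m = 6:

    from math import gcd

    k, m = 2, 6                        # gcd(2, 6) = 2, not 1
    a, b = 1, 4
    print((k * a) % m == (k * b) % m)  # True: 2 ≡ 8 (mod 6)
    print(a % m == b % m)              # False: cancellation fails
    print(gcd(k, m))                   # 2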

Class closed with students summarizing their thoughts about modular arithmetic in their notes.

Visit the class summary for a student's perspective and to view the lesson slides.

Saturday, April 20, 2013

IPS - Day 49

The class was supposed to work on their chocolate experiment analysis and report. The objective today was to put together a mini-poster and compare their work to other groups. The purpose was to help them improve their reports. I'll find out on Monday how well this went.

Visit the class summary to view the lesson slide.

Discrete Math - Day 49

I was at the NCTM Denver conference today. The class was supposed to be working on their cryptology research paper today in the library. Hopefully that is what they did.

Visit the class summary to see the lesson slide.

NCTM Conference Days 2 and 3

Wow, what a couple of days. Day 2 was a long day. It started off with my presentation on discrete math at 8:00 a.m. I wasn't sure how many people would attend and was pleased to see so many people interested in discrete math. We worked through a few problems in number theory with direct connections to cryptography. I really had a fun time and greatly appreciated the enthusiastic participation of those in attendance.

Afterward, I was able to relax and enjoy numerous sessions as a spectator. I focused on sessions dealing with proof or statistics. The proof sessions tended to focus on justification more than proof; valuable, but not the more structured, formal argument I was looking for.

The statistics sessions were fantastic! I picked up some great investigation activities, some reference books to check out, and ideas on using software and presentations to capture student interest. There was an engaging activity that involved three strands of string, tying knots, looking at the probabilities of outcomes, and then conducting an inference on the results. What I like about this particular activity is that I could use it in my Discrete Math class while we are studying Combinatorics and Discrete Probability. For my Inferential Probability and Statistics class, I can use it to have students run simulations to examine expected outcomes and make inferences, and in my AP Stat class, we can work through the probabilities as tree diagrams, look at conditional probabilities, and conduct a hypothesis test. Brilliant investigation.

In the evening there was an AP Statistics panel discussion. Three table leaders discussed scoring and responses from the 2012 exam. The panel reinforced the messages that I have been conveying to my class: determine the correct procedure, check your assumptions and conditions for that procedure, produce understandable work and correct calculations, and draw a conclusion linked directly to the work that you produced. It was good to hear that I have been emphasizing the things I should; I still wonder how effectively I am sending the message and how well my class is absorbing it. I'll know soon enough.

So this day was a very long day. I left my house at 6:45 a.m. and did not get home until 9:15 p.m. It was well worth the time but I was tired!

My tiredness showed the next morning. I just didn't want to get out of bed, but there was a session I wanted to attend on cryptography that started first thing in the morning. I managed to get going but arrived at the talk about 10 minutes late. Thankfully I had only missed the introductory piece that gave some background on the origins of what we were going to see. The talk focused on using Excel to create formulas to produce encryption and decryption. I am not sure I would use this in my discrete math class, but it is good to have it in the back of my mind as a possibility. The other piece I thought about was the number correspondence to letters of the alphabet. In the spreadsheet, a value of zero corresponded to a "space". This necessitated the use of mod 27 rather than mod 26, which is what the Caesar Cipher activities from Shodor's Interactivate site use. By the way, if you haven't been to this site, I recommend you look through the vast amount of activities and applets they provide for all math levels.

I saw the advantage of doing this since students are naturally inclined to use a=>1, b=>2, c=>3,...,z=>26. The issue I always ran into was that once modulo arithmetic was introduced, specifically mod 26, we end up with a zero value. I then had to re-orient students to code a=>0, b=>1, c=>2,...,z=>25. By introducing a space=>0 value, students do not have to re-orient their prior thinking; it provides a smooth transition to modulo arithmetic and a way to embed spaces into the message naturally rather than artificially.

I also looked at proofs in graph theory. There were several problems similar to those I use in class and a few new ones that I want to use. The proofs or constructions of counter-examples were straightforward. It got me thinking that the sequencing for the discrete math course may be better served using the order of Combinatorics and Discrete Probability, Graph Theory, and then Number Theory and Cryptography. This allows all of the counting techniques to come into play; in fact, in the few problems we worked through, Gaussian summation, permutations, and the pigeon-hole principle could all come into play. The proofs are simpler and more straightforward than those in number theory, and this could provide a more solid foundation in proof that could be pushed forward in the number theory section. I have to think about this, but it seems like the right way to go next year.

All-in-all it was a productive experience that was worth the time and effort. I'm glad the conference was in Denver; I'm just not sure I could swing the extra days for travel, especially with the AP Statistics exam looming so close.

Well, that's a wrap for the NCTM National Conference in Denver for 2013. I'm looking forward to the CCTM fall conference. I missed that it wasn't held last fall.


Thursday, April 18, 2013

NCTM Conference Denver - Day 1

Today was the first day of general sessions at the NCTM National Conference in Denver. I attended several worthwhile sessions.

The best was one I decided to just drop in on. The talk was on statistical inference and the use of software that aids in the inference process. I use resampling and bootstrapping in my Inferential Probability and Statistics class, and one of the issues I run into is that you need lots of iterations to get good results. The software presented was freely accessible and provided not only these techniques but also a wide range of graphing utilities that create comparative graphs. It was perfect, and I couldn't believe I stumbled onto this session. Mad props to the Core Plus Mathematics Program for making these software tools available for anyone's use.

Another session I sat in on was presented by Daren Starnes. His main message was that the first task students face on the AP Statistics exam is to correctly identify the statistical procedure that is most appropriate for the situation. This was exactly what I had started to emphasize to my class the last two sessions. I felt good about the direction I am trying to take my class. The question is how effectively am I getting the message across. I have three more weeks to cement the message.

I also saw several sessions centered around giving students rich problems. I had seen many of these problems, or variations of them, previously. I enjoyed the discussions we had about student solutions to the problems and ways to push student thinking as they worked through them.

One technique that I don't use enough is to take more frequent breaks while students work through tasks and have them briefly discuss the strategies they are using. This helps get stuck students moving and pushes others' thinking even further. This is a technique I should use in Discrete Math as students work through difficult mathematical concepts.

Tomorrow I'll be presenting the problem-based approach that I use in Discrete Math. There are also several statistics and proof sessions that I plan to attend. The day culminates with an AP Statistics panel discussion and reception in the evening. It's a big day and a long day and one that I am looking forward to.



Discrete Math - Day 48

Today I was at the NCTM National Conference in Denver. The class went to the school library and worked with the librarian on quality source material for a research paper on cryptology.

Visit the class summary for the lesson slides.

Wednesday, April 17, 2013

IPS - Day 48

Today was experiment day. The students carried out their design from last class to test chocolate melting times. As one student said after leaving a chocolate chip on their tongue to melt, "This sounded a lot better than it actually is."

No issues arose while collecting the data. I discussed how this experiment exhibited the properties of a good experimental design: control/compare, randomize, and replicate.

We then discussed analysis. Most students focused on treating each individual column of data (milk, semi-sweet, and dark) separately and analyzing the results. When no one could offer an alternative way of analyzing the data, I offered the idea that differences in melting time could be compared: since semi-sweet was the default standard, the analysis could look at milk minus semi-sweet melting times and dark minus semi-sweet melting times for each student.

As for the analysis, I emphasized that a comparison of the distributions was needed. Focusing solely on the differences in means is not enough, though that is typically how students are taught to compare results. As I pointed out, the spread and outliers also need to be considered to make a reasoned assessment.

Each group is to analyze and report their results and conclusions. I reminded students to include the purpose of the analysis and that they need to draw a conclusion. If there is no purpose for the analysis or if you aren't going to draw a conclusion from your analysis, why conduct the analysis?

I will be at a math conference and miss the next class. The groups are to prepare draft posters of their results and compare against work from other groups.

Visit the class summary for a student's perspective and to view the lesson slides.

Tuesday, April 16, 2013

IPS - Day 47

Today we continued looking at the "How Fast Do They Melt in Your Mouth?" experimental study. The focus today was to decide on an experimental design to use.

First, I had students come to a consensus in their groups as to what design structure they wanted to use. I had each group present their design. As each design was presented, students were asked to consider the following questions:

  • Does the design answer the question of interest?
  • Is there control/compare built into the design?
  • Is there randomization built into the design?
  • Is there replication in the design?
This was time well spent as students questioned aspects of the design and modified designs to address the questions.

There were several well-structured designs using a two-sample design and several using a matched-pair design. I then asked the students to decide how they wanted to proceed with conducting the experiment in class. The class was in favor of conducting a matched-pair design; they get more chocolate this way.

I reviewed the issues brought up during the last class as to factors that may influence the results. With those factors and the design structure in mind, I asked students to determine the experiment's procedure.

The decision was to conduct a matched-pair design using treatments of milk, semi-sweet, and dark chocolate chips. The treatment order would be randomly assigned to each student. The procedure is:
  • Rinse mouth with water before placing chip on tongue
  • Mouth to remain closed
  • Tongue to remain as still as possible, with no chewing, smashing, etc.
  • Subject will self-time the number of seconds for chip to melt
  • Pieces will start at the same temperature (room or chilled)
  • Subject will wait until ready to place chip on tongue before obtaining chip
Students volunteered to bring in the chips. Next time the class will conduct the experiment.


Visit the class summary for a student's perspective and to view the lesson slides.

Discrete Math - Day 47

First, a little mea culpa. In my last entry, I indicated to students that we could use congruence property 1 to show that having 3k1 ≡ 6 (mod 5) means that k1 ≡ 2 (mod 5). In fact, what we proved is that if a ≡ b (mod m) then, for any integer k, ka ≡ kb (mod m). We are actually using the converse of this, i.e. if ka ≡ kb (mod m) then a ≡ b (mod m) for any integer k such that gcd(k, m) = 1, which we did not prove. At this juncture, I wanted to convey the sense of what is happening. The class will see the need for gcd(k, m) = 1 shortly; in fact, they already saw it in one of the encoding schemes where different values were mapped to the same letter.

I started by asking students how things went with trying to solve Fred's Baseball Card problem using the algorithm from yesterday's class. As expected, many didn't try or simply got stuck. I had decided last night that I wasn't going to press the use of this algorithm. Instead, I used it to point out some key aspects that I felt they should know or be able to do.

With that, I wrote out the congruence statements for the problem and told students I expected them to be able to do this. 

  1. x ≡ 1 (mod 2)
  2. x ≡ 1 (mod 3)
  3. x ≡ 1 (mod 4)

Students then wondered about the last condition of even piles when divided by 7. Some wondered if this meant the value was congruent to zero, which is precisely what is happening.

  4. x ≡ 0 (mod 7)

I told them they should be able to find the linear equation that describes the solution set and to find the slope of this equation using the least common multiple.

I also said they should be able to perform some of the algebraic substitutions, although I am not expecting them at this point to work completely through an entire system.

Everyone seemed to agree that they could do this. I then told them I expected them to be able to take a congruence statement and write the equivalent equality expression using the definition of congruence.
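
Rather than the substitution algorithm, the system can also be solved by brute force. Here is a small sketch, my own illustration, that searches for the smallest value satisfying all four congruences:

    # Fred's Baseball Card conditions as (modulus, required remainder) pairs.
    conditions = [(2, 1), (3, 1), (4, 1), (7, 0)]

    x = next(n for n in range(1, 2 * 3 * 4 * 7 + 1)
             if all(n % m == r for m, r in conditions))
    print(x)   # 49: one card left over in piles of 2, 3, and 4; even piles of 7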

With that said, I focused on modular arithmetic. My goal was to continue pursuing this topic from a different perspective. Students generally find this topic interesting, as was the case today.

First, I revisited the equivalence relationship properties we had explored a couple of classes ago. It appears that congruence behaves much like equality from an arithmetic perspective. Can this be the case?

The answer is yes. In fact, modular arithmetic possesses the same properties as integer arithmetic. Modular arithmetic has associative properties of addition and multiplication, a distributive property of multiplication over addition, and commutative properties of addition and multiplication.

I did not ask students to prove these results here but indicated that a question on the final could ask them to prove one or more of these properties. I did ask them to try these with some values so they could see that the properties held.

I like to point out that algebra is really the study of structures and of trying to understand which entities possess these structures. For example, polynomials and matrices possess many of the same structures as integers. With matrices, though, multiplication is generally non-commutative. This is another aspect that I like to point out: the commutative property is special.

I then showed students addition and multiplication tables for arithmetic mod 3.

     + | 0 1 2          x | 0 1 2
     0 | 0 1 2          0 | 0 0 0
     1 | 1 2 0          1 | 0 1 2
     2 | 2 0 1          2 | 0 2 1

This took a little bit of time for students to absorb. There were a lot of good questions about the workings of these tables. I pointed out that arithmetic mod 3 actually is more complete than arithmetic with integers, because mod 3 arithmetic provides multiplicative inverses whereas integer arithmetic does not.

As we talked through how the tables worked, students were able to verify with values that the tables were correct. At this point, one student wondered if this idea would work with other values for m. I love when a student anticipates what is coming next.

I produced the next slide, which asked students to create addition and multiplication tables for mod 5 arithmetic. The one student apologized profusely to the class for coming up with the idea. The class started to tackle these tables. There were a few questions and points of hesitation, but as they proceeded they started to understand how the tables worked. Some students wanted to just write down the values regardless of how large they were. I would ask what happens when they divided those values by 5. The students would realize they could reduce the result to a value less than 5.

Unfortunately, class ended at this point. The class found this an interesting study. We'll continue to look at this idea and at what happens if m is not a prime.

I will be at the NCTM National Conference in Denver for the next two classes. I have students write a research paper on cryptology and will use my time away to have the class work with the school librarian to start researching their topic and putting together their resource materials.


Visit the class summary for a student's perspective and to view the lesson slides.

      Monday, April 15, 2013

      Discrete Math - Day 46

Today we continued looking at solving systems of linear congruences. Class started with me addressing some of the questions students wrote down last class.

The first question dealt with how many times the same numbers could be congruent. This question indicated some confusion between a single congruence and a system of congruences. I tried to clarify by pointing out that two values either are or are not congruent modulo a third value. In a system of linear congruences, a linear equation describes the solution set, and since every value of that form fits the criteria, there are an infinite number of solutions.

      The second question asked about extending transitivity to additional values. This was a good projection from the student and I confirmed that repeatedly applying the third congruence property guarantees that transitivity extends to as many values as you want.

Another student wondered why the proofs given actually prove the congruence properties. Looking through the work that students produced, it was evident that they are struggling with proof. As this is the first time students have had to work with proofs, this is understandable.

I told the class that all a proof is is the use of known properties and generally accepted techniques, combined into a structured, logical argument that reaches a conclusion. For the congruence property proofs, we started with a stated definition and used generally accepted algebraic rules for integers to step through to the logical conclusion that the property was true.

I told the class it was okay that they may be struggling with where to start or how to proceed with a proof. The difficulty for students is structuring their reasoning and logic so that it flows from one step to the next.

The last question asked how to solve systems of linear congruences. This was the topic for the lesson, although I may re-think this lesson; I will look at how much this process is actually needed in the work we will undertake.

      The lesson revisited the Chinese remainder problem. The problem was re-written as a system of linear congruences.

      1. x ≡ 2 (mod 3)
      2. x ≡ 3 (mod 5)
      3. x ≡ 2 (mod 7) 

I asked the class what the first statement told us about the relationship among x, 2, and 3. After some thought, a student said that x - 2 is a multiple of 3. I wrote down x - 2 = 3k1. We can use this to give us x = 3k1 + 2.

      And now this is where things start to get messy.

      Use equation 2 and substitute the expression 3k1 + 2 for x. The result is 3k1 + 2 ≡ 3 (mod 5), or 3k1 ≡ 1 (mod 5), using congruence property 2.

      We have that 6 ≡ 1 (mod 5). Using congruence property 3 we now have 3k1 ≡ 6 (mod 5). Re-writing the value 6 as 2 x 3, we see that 3k1 ≡ 2 x 3 (mod 5). Using congruence property 1 we end up with k1 ≡ 2 (mod 5).

We repeat the process using the relationship k1 ≡ 2 (mod 5). This means
k1 - 2 = 5k2, or k1 = 5k2 + 2, and through substitution

x = 3k1 + 2 = 3(5k2 + 2) + 2 = 15k2 + 8. Substitute this expression into the third congruence, so that
15k2 + 8 ≡ 2 (mod 7), and repeat the process.

15k2 + 8 ≡ 2 (mod 7) means that 15k2 ≡ -6 (mod 7). But -6 ≡ 1 (mod 7) and 15 ≡ 1 (mod 7), so -6 ≡ 15 (mod 7). The result is 15k2 ≡ 15 (mod 7). Again, using congruence property 1, we see that k2 ≡ 1 (mod 7). Writing out the result shows that k2 = 7k3 + 1. Substituting into our expression for x, we now have
x = 15k2 + 8 = 15(7k3 + 1) + 8 = 105k3 + 23.

This is the equation the class found when it first worked on this problem. Needless to say, student heads were spinning as they tried to work through the process and the algebraic manipulation. As I told students, for any such problem they already know how to find the slope of the solution set, since it is simply the least common multiple of the modulus values. The issue is finding the starting value. For this problem, the starting value is bounded above by 105, so there are only a few multiples of 7 that need to be checked. For a problem like the broken egg problem, with its larger moduli, a process like this would be quicker than checking values.
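
For the curious, the entire substitution process can be captured in a few lines of Python. This is a sketch of my own, not something we did in class; it assumes the moduli are pairwise coprime so that each multiplicative inverse exists, and it uses the built-in pow(a, -1, m) (Python 3.8 and later) to compute that inverse:

    # Solve x ≡ r (mod m) for each (r, m) pair by successive substitution.
    def solve_system(congruences):
        x, step = congruences[0]        # start: x = r1, stepping by m1
        for r, m in congruences[1:]:
            # Solve x + step*k ≡ r (mod m) for k, then fold k back into x.
            k = ((r - x) * pow(step, -1, m)) % m
            x = x + step * k
            step = step * m             # the product is the lcm for coprime moduli
        return x, step

    x0, step = solve_system([(2, 3), (3, 5), (2, 7)])
    print("x =", str(step) + "k +", x0)   # prints: x = 105k + 23

Running it on today's system reproduces the equation above, starting value and all.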

I asked students to attempt to use this process on the Fred's Baseball Card problem. I'll see how they fared, but I'm thinking that I don't want to belabor this procedure. I know we'll need to look at equivalent congruence values that come up in this process, but it may be that focusing on modular arithmetic and the corresponding addition and multiplication tables is all that is needed.

      Modular arithmetic is actually the next topic and I think a day spent there will be more productive than spending another day on this process. I may eliminate this piece altogether for the next time.

      Visit the class summary for a student's perspective and to view the lesson slides.

      IPS - Day 46

Today we looked at experimental design. This piece is the third leg of working with sample surveys, observational studies, and experiments.

The experiment comes from the "How Fast Do They Melt in Your Mouth?" investigation in NCTM's Navigating through Data Analysis in Grades 9-12 book. I like this investigation because it reinforces vocabulary while allowing students to think about the overall structure of an experiment.

      Today the class worked through the first seven questions and then we discussed their responses. Questions concerned treatments, experimental units, response variables, factors that could affect the experiment's outcome, scope of inference, and causality.

      This was a good discussion and students had to be tuned into the vocabulary in order to answer the questions. It has been a couple of weeks since we actually discussed experimental design, so there were a lot of questions about the vocabulary.

As we discussed responses as a class, some questions arose about the specifics of the treatments: would milk chocolate and white chocolate chips really fit within the context of the problem statement? How do you capture the melt time?

As always, as we listed out factors that could impact the study, I had to be careful to specify the context, since such things as open or closed mouth, mouth temperature, amount of saliva, and size of piece take on a whole new context with high school students. I use this as an opportunity to say that context is important because we don't want any misinterpretations of what the experiment is about.

      The scope of inference was also an interesting discussion. Some students questioned whether the result is valid beyond the individual or beyond the group being tested. Others wondered if the results might be applicable to others in their grade level or perhaps the entire school. Others thought the results could apply to a much broader group.

For homework, students were asked to design a two-sample and a paired sample design. I briefly described the difference between these designs. The two-sample design consists of two independent groups, with assignment to the groups made at random. One group receives one treatment, the other group receives the other treatment, and then the results from the two groups are compared.

      For the paired sample design, there is a direct connection between the pairs. In this scenario, one person receives both types of chocolate and then their melt times are compared.
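
A few lines of Python make the contrast concrete. The subject labels and chocolate names here are hypothetical; only the assignment logic matters:

    import random

    subjects = ["S" + str(i) for i in range(1, 11)]   # ten hypothetical students

    # Two-sample design: randomly split subjects into two independent
    # groups; each group tastes one type of chocolate.
    random.shuffle(subjects)
    group_a, group_b = subjects[:5], subjects[5:]

    # Paired design: every subject tastes both chocolates; only the
    # order of the two treatments is randomized for each subject.
    orders = {s: random.sample(["milk", "dark"], 2) for s in subjects}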

      Tomorrow, the groups will come to a consensus for an experimental design and then we'll discuss designs as a class. The goal is to have the class decide on an experimental design which can be carried out by the class.

      Visit the class summary for a student's perspective and to view the lesson slide.

      Friday, April 12, 2013

      IPS - Day 45

      Today we took a look at observational studies and their claims. I used the "What's This Study Do?" investigation from NCTM's Navigating through Data Analysis in Grades 9-12 book.

      This investigation asks students to read through three brief articles centered around cell phone usage and cancer. The investigation asks students to consider the following questions for each study:

      1. What is the question of interest?
      2. What problems might impact the study?
      3. Does the study support the conclusions made?
      4. Are the results of the study applicable to all people?
      I had students answer these questions individually. They then discussed their responses in their groups. I then had students share out and we discussed their answers as a class. This turned into a thoughtful discussion.

For the first study, students debated whether the question of interest was "Does cell phone usage cause cancer?" or "Is there a difference in cell phone usage between people with and without brain cancer?" After discussion, the class concluded the latter was the more appropriate question of interest. Although there wasn't as much discussion around the questions of interest for the other two studies, there were comments that helped to sharpen the focus of the questions.

For problems that might impact the study, the ideas were far-ranging and appropriate. There were considerations about bias, random sample selection, and cause and effect. I was pleased to see that the points brought up were all of a statistical nature.

One point that was brought up was the difference in sample sizes for a couple of the studies. Students tend to believe that when comparing two groups you must have exactly the same sample sizes. We discussed this issue at length, and I referred students back to our discussions of what constitutes a representative sample. Sample size governs the precision with which we can measure; within the precision we need, the samples themselves can be of different sizes. Students took this at face value but seemed uncomfortable with the idea. I'll need to build in an investigation that will make this idea more real and understandable for them.
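
One possibility is a quick simulation along these lines (a sketch of my own, with made-up means and sample sizes), showing that unequal group sizes do not bias the comparison; they only affect its precision:

    import random, statistics

    # Draw two groups of different sizes from the same population and
    # record the difference in their means; repeat many times.
    def one_difference(n1, n2):
        g1 = [random.gauss(50, 10) for _ in range(n1)]
        g2 = [random.gauss(50, 10) for _ in range(n2)]
        return statistics.mean(g1) - statistics.mean(g2)

    diffs = [one_difference(100, 40) for _ in range(2000)]
    print("average difference:", round(statistics.mean(diffs), 2))  # near 0: no bias
    print("spread (sd):", round(statistics.stdev(diffs), 2))        # set by the two sizes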

      Most students did not feel the first two studies, which were observational studies, supported the claims. There was more debate as to the third study, which was an experiment on rats. This was another good debate on the merits of the two types of studies. It also allowed me to revisit experimental design and what conclusions can be drawn from a randomized, comparative experiment.

The class then briefly discussed some ideas they had on the studies. One person brought up the idea that the experiment should be conducted on humans to be conclusive, which allowed us to revisit the ethics of experiments. Another student brought up the idea that cell phones from different eras could possibly have different impacts on cancer development, and suggested including two cell phone treatments in the rat experiment, one corresponding to older cell phone designs and one corresponding to newer designs.

Overall, the discussion clearly showed students were attuned to the issues and were thinking about the questions from a statistical perspective.


      Visit the class summary for a student's perspective and to view the lesson slide.