Clan x86
General Forums => Academic / School => Topic started by: Sidoh on March 17, 2011, 04:29:31 PM

Lots of smart people here. This calls for a puzzles thread. I've heard a few new (and old) ones recently, and thought I'd share. Here's how I'm gonna run it:
1. All puzzles will be listed in this post. I'll edit the post to add new ones. Add a post/send a PM and I'll add yours to this post.
2. PM answers, add posts to ask questions.
3. I'll indicate who has solved each of the puzzles underneath it.
Let's do this.
You gather 100 perfect logicians. You tell all of them the following:
 They will be placed in a room with 99 other perfect logicians, arranged in a circle so they can all see each other
 Their heads will be painted a particular color; they cannot see the color of their own head, but everyone else can
 At least one of the hundred heads will be painted blue
 You will flip the lights on and off
 If they figure out their head is painted blue, they must leave next time the lights go off
 They are not allowed to talk or signal each other in any way. They may only observe the others, and leave if/when they figure out their head is blue.
You proceed to paint all of their heads blue, and begin the exercise. What happens, if anything, and when?
Prove or disprove: you can completely fill a cube with finitely many smaller, all differently-sized cubes.
You have 100 pennies. Exactly 50 of them are heads-up, but you don't know which ones. If you're blindfolded, how would you divide the pennies up into two groups with an equal number of heads? Repeat the same exercise when exactly 10 of them are heads-up. You're not allowed to feel the pennies to test if they're heads-up. Say you're only allowed to touch the pennies by gripping them on the edges.
SOLVED BY: dark_drake (http://forum.x86labs.org/index.php?action=profile;u=250)
I came across this problem recently:
Given [latex]\{ f_m(\mathbf{x}) \}_{m=1}^M,\ f_m(\mathbf{x})\ :\ X \subseteq \mathbb{R}^n \to \mathbb{R}[/latex], denote
[latex]
\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(\mathbf{x}) & f_2(\mathbf{x}) & \cdots & f_M(\mathbf{x}) \end{bmatrix}^T
[/latex]
Under what assumptions is the matrix
[latex]
H = \int_X \mathbf{f}(\mathbf{x}) \mathbf{f}(\mathbf{x})^T d \mathbf{x}
[/latex]
strictly positive definite? Here integration is componentwise.

Why would perfect logicians hang out in a room with heads painted blue?

Why would perfect logicians hang out in a room with heads painted blue?
All perfect logicians hang out in a room with their heads painted blue (or red).

So Blue Man Group is perfect logicians?

So Blue Man Group is perfect logicians?
No, no. The converse is not true.

I object. nslay's "puzzle" is really just advanced math, while the others are cleverly disguised math. I demand he reword the "puzzle" into a word problem involving logicians and hats.

I object. nslay's "puzzle" is really just advanced math, while the others are cleverly disguised math. I demand he reword the "puzzle" into a word problem involving logicians and hats.
It turns out to be very easy if approached the right way. It took me a while to figure out anyway.

You have 100 pennies. Exactly 50 of them are heads-up, but you don't know which ones. If you're blindfolded, how would you divide the pennies up into two groups with an equal number of heads? Repeat the same exercise when exactly 10 of them are heads-up.
By feeling the pennies. I probably couldn't do it, but my blind friend Kevin probably could easily.
Where is DD's solution? It's early but I don't see it in the thread.

2. PM answers, add posts to ask questions.
What's the point of it if we can already see solutions? ;)
And I agree with rabbit and object to nslay's thing being a puzzle; it is just a pure math problem you have to work backwards from the given conditions to solve. Sidoh's original puzzles don't require any complex math, just simple reasoning and deduction to get a solution. In order to solve the math problem you would have to know some advanced math just to understand the notation used to describe it, not to mention a knowledge of linear algebra to understand matrices, their transposes, and their properties. Oh yeah, and integration too, not to mention it just isn't even interesting to solve! Therefore I demand it be removed, thanks :)

On the contrary, it is very interesting in the sense of optimization. If you had a Hessian of that form (which I did), positive definiteness would imply strict convexity and therefore a unique global optimum.
The reasoning probably isn't as simple as you think it is. There's a twist and you probably didn't realize it.
EDIT: Oh, I misread. You were talking about Sidoh's puzzle. Nah, this really does boil down to really simple math. You're just intimidated by linear algebra and integration (which really are basic math). But as I said, there is a twist.

Since it contains real values, wouldn't f(x)f(x)^T be positive semidefinite? And in order for H to be positive definite, the functions f_m need to be linearly independent. Probably not correct, but that's all I can remember from linear algebra.
I'm not intimidated by math, it's just that I don't find pure math interesting. Now if you put that into a problem (optimization), that's a different story. And if linear algebra and calculus are considered basic math, then what is addition, subtraction, algebra? Dumbass math?

Algebra is one of the most heavily-populated branches in math. People get PhDs in math studying algebraic topics.
As for the puzzle, I don't really see what harm it's doing. If you don't find it interesting, then don't do it. It's really pretty simple, isn't it?

Indeed, you do get positive semidefinite for free. However, you're still only half right about linear independence (and this got me too!).
Here is an example of linearly independent functions where said integral isn't strictly positive definite.
[latex]
f_1(x) = \sin(x) \\
f_2(x) = \begin{cases}
\sin(x) & x \neq 0 \\
1 & x = 0
\end{cases}
[/latex]
So
[latex]
\begin{bmatrix} 1 & -1 \end{bmatrix} \int_{-\pi}^{\pi} \mathbf{f}(x) \mathbf{f}(x)^T \, dx \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 & -1 \end{bmatrix} \int_{-\pi}^{0} \mathbf{f}(x) \mathbf{f}(x)^T \, dx \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \begin{bmatrix} 1 & -1 \end{bmatrix} \int_{0}^{\pi} \mathbf{f}(x) \mathbf{f}(x)^T \, dx \begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0
[/latex]
One additionally has to require that the functions be linearly independent in the almost-everywhere sense, i.e., that no nontrivial linear combination vanishes outside a set of measure zero. I totally ruined it, but that's the twist!
I asked this question because it is both counterintuitive (the integral of a rank-1 matrix somehow gives a full-rank result) and deceptively obvious once you exploit linear independence (because one has to remember the twist).

I'm pretty sure this is The Puzzle Thread not The Math Thread. Knock it off...

I'm pretty sure this is The Puzzle Thread not The Math Thread. Knock it off...
Math puzzles are a subset of puzzles! There's no need to be hostile. If you don't like the math puzzles, don't do them! :D

I'm pretty sure this is The Puzzle Thread not The Math Thread. Knock it off...
I didn't know there was a difference. Do you like your math served up in words, or chopped into numbers?
Maybe that's how we can persuade people to like math ... we can rename the subject to "Puzzles" or "Solving Puzzles"!

Given two lines in 1D of lengths [latex]\ell_1,\ \ell_2[/latex] each centered at [latex]x_1,\ x_2 \in \mathbb{R}[/latex], find the overlap of the lines. For example, if [latex]x_1 = 0,\ x_2 = \frac{1}{2}[/latex] and [latex]\ell_1 = \ell_2 = 1[/latex] then the overlap should be [latex]\frac{1}{2}[/latex].
It has a relatively simple solution, though it does not appear to be as simple to derive as one would expect. Perhaps I'm overthinking it. I want to see if anyone can find an easier derivation.
I'll post my derivation in a day or upon seeing a derivation.
EDIT: Oh yeah, I think I found an easier way. I'll post both solutions.

My original solution:
My idea was to represent each line as a boxcar function (http://mathworld.wolfram.com/BoxcarFunction.html). The integral of a boxcar function is the length of its nonzero portion. Thus, the integral of the product of the two boxcar functions is the length of the overlap.
So we have two lines
[latex]
\Pi_{-\ell_1/2,\ell_1/2}(x - x_1) = H(x - x_1 + \ell_1/2) - H(x - x_1 - \ell_1/2) \\
\Pi_{-\ell_2/2,\ell_2/2}(x - x_2) = H(x - x_2 + \ell_2/2) - H(x - x_2 - \ell_2/2)
[/latex]
where [latex]H(\cdot)[/latex] is the Heaviside function (http://mathworld.wolfram.com/HeavisideStepFunction.html)
Then, the overlap is given by the integral
[latex]
\text{overlap} = \int_{-\infty}^{\infty} \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \, \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx
[/latex]
First note that
[latex]
\frac{d}{dx} \left[ \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \right] = \delta(x - x_1 + \ell_1/2) - \delta(x - x_1 - \ell_1/2) \\
\int \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx = (x - x_2 + \ell_2/2) I(x - x_2 + \ell_2/2 > 0) - (x - x_2 - \ell_2/2) I(x - x_2 - \ell_2/2 > 0)
[/latex]
where [latex]\delta(\cdot)[/latex] denotes the Dirac delta function (http://mathworld.wolfram.com/DeltaFunction.html) and [latex]I(\cdot)[/latex] denotes the indicator function.
By integrating by parts, we can simplify the integral
[latex]
\int_{-\infty}^{\infty} \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx = \left. \left[ \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \int \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx \right] \right|_{-\infty}^{\infty} \\
- \int_{-\infty}^{\infty} \underbrace{\left[ (x - x_2 + \ell_2/2) I(x - x_2 + \ell_2/2 > 0) - (x - x_2 - \ell_2/2) I(x - x_2 - \ell_2/2 > 0) \right]}_{\int \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx} \underbrace{\left[ \delta(x - x_1 + \ell_1/2) - \delta(x - x_1 - \ell_1/2) \right]}_{\frac{d}{dx} \left[ \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \right]} \, dx
[/latex]
The first term cancels since [latex]\Pi_{-\ell_1/2,\ell_1/2}(\pm\infty - x_1) = 0[/latex]. The second term is trivial to compute since
[latex]
\int_{-\infty}^{\infty} f(x) \delta(x - a) \, dx = f(a)
[/latex]
This gives
[latex]
\int_{-\infty}^{\infty} \Pi_{-\ell_1/2,\ell_1/2}(x - x_1) \Pi_{-\ell_2/2,\ell_2/2}(x - x_2) \, dx = - (x_1 - \ell_1/2 - x_2 + \ell_2/2) I(x_1 - \ell_1/2 - x_2 + \ell_2/2 > 0) \\
+ (x_1 - \ell_1/2 - x_2 - \ell_2/2) I(x_1 - \ell_1/2 - x_2 - \ell_2/2 > 0) \\
+ (x_1 + \ell_1/2 - x_2 + \ell_2/2) I(x_1 + \ell_1/2 - x_2 + \ell_2/2 > 0) \\
- (x_1 + \ell_1/2 - x_2 - \ell_2/2) I(x_1 + \ell_1/2 - x_2 - \ell_2/2 > 0)
[/latex]
If we write [latex]\Delta x = x_1 - x_2,\ u = \frac{\ell_1+\ell_2}{2},\ v = u - \ell_2 = \frac{\ell_1-\ell_2}{2}[/latex] then we have the simpler form
[latex]
\text{overlap}(x_1,x_2,\ell_1,\ell_2) = - (\Delta x - v) I(\Delta x - v > 0) \\
+ (\Delta x - u) I(\Delta x - u > 0) \\
+ (\Delta x + u) I(\Delta x + u > 0) \\
- (\Delta x + v) I(\Delta x + v > 0)
[/latex]
It's a nifty looking solution. In C it would be
double x1, x2, l1, l2, dx, u, v, tmp, L;
dx = x1 - x2;
u = 0.5*(l1 + l2);
v = u - l2;
tmp = dx - v;
L = -tmp*(tmp > 0);  /* -(dx - v) I(dx - v > 0) */
tmp = dx - u;
L += tmp*(tmp > 0);  /* +(dx - u) I(dx - u > 0) */
tmp = dx + u;
L += tmp*(tmp > 0);  /* +(dx + u) I(dx + u > 0) */
tmp = dx + v;
L -= tmp*(tmp > 0);  /* -(dx + v) I(dx + v > 0) */
One could do a more direct solution by considering the end points of the lines instead of the midpoints. Then it's a lot easier to derive directly and reduces to 4 possible if conditions. That matches the 4 indicator functions the integration gives (indicators are like branchless if statements).
EDIT: Fixed some errors (Forgot to divide by 2 in some places).