import numpy as np
import pandas as pd
import keras
from keras import regularizers, optimizers
from keras.layers import Input, Conv1D, Dense, Flatten, Activation
from keras.layers import UpSampling1D, MaxPooling1D, ZeroPadding1D
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras.models import Model, load_model
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Figure 7-53. Importing the necessary modules
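The listings in these figures assume that the Kaggle credit card fraud data set has already been read into a DataFrame named df. A minimal sketch, assuming the CSV file is named creditcard.csv and sits in the working directory:

# Assumed file name and location for the Kaggle credit card fraud data set
df = pd.read_csv("creditcard.csv")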
First, you use the standard scaler on the Time and Amount columns and build the training and testing sets, as shown in Figure 7-54. And then you reshape the data sets as shown in Figure 7-55.
df['Amount'] = StandardScaler().fit_transform(df['Amount'].values.reshape(-1, 1))
df['Time'] = StandardScaler().fit_transform(df['Time'].values.reshape(-1, 1))

anomalies = df[df['Class'] == 1]
normal = df[df['Class'] == 0]

# Shuffle the normal entries before sampling a subset of them
# (shuffle count, subset size, and split parameters are representative values)
for f in range(0, 20):
    normal = normal.iloc[np.random.permutation(len(normal))]

data_set = pd.concat([normal[:2000], anomalies])

x_train, x_test = train_test_split(data_set, test_size=0.4, random_state=42)
x_train = x_train.sort_values(by=['Time'])
x_test = x_test.sort_values(by=['Time'])

y_train = x_train['Class']
y_test = x_test['Class']
Figure 7-54. Using the standard scaler on the columns Time and Amount, defining the anomaly and normal value data sets, and then defining a new data set to generate the training and testing sets from. Finally, these sets are sorted in increasing order of time
x_train = np.array(x_train).reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = np.array(x_test).reshape(x_test.shape[0], x_test.shape[1], 1)

input_shape = (x_train.shape[1], 1)

y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)
Figure 7-55. Reshaping the training and testing sets so that they correspond with the input shape of the model
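As a quick sanity check, you can print the resulting shapes (a minimal sketch; the exact sample counts depend on your split):

# Each sample is now a (columns, 1) sequence, and the labels are
# one-hot vectors with two classes.
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)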
Now that the data preprocessing is done, let's build the model. This is the encoding stage (see Figure 7-56).
input_layer = Input(shape=input_shape)

# ENCODING STAGE
# Pairs of causal 1D convolutional layers and pooling layers
# comprising the encoding stage
# (filter counts, kernel sizes, and dilation rates are representative values)

conv_1 = Conv1D(filters=int(input_shape[0]), kernel_size=2, dilation_rate=1,
                padding='causal', strides=1, input_shape=input_shape,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(input_layer)
pool_1 = MaxPooling1D(pool_size=2, strides=1)(conv_1)

conv_2 = Conv1D(filters=int(input_shape[0] / 2), kernel_size=2, dilation_rate=2,
                padding='causal', strides=1,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(pool_1)
pool_2 = MaxPooling1D(pool_size=2, strides=1)(conv_2)

conv_3 = Conv1D(filters=int(input_shape[0] / 4), kernel_size=2, dilation_rate=4,
                padding='causal', strides=1,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(pool_2)

# OUTPUT OF ENCODING STAGE
encoder = Dense(int(input_shape[0] / 8), activation='relu')(conv_3)
Figure 7-56. Defining the code for the encoding stage
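Each convolutional layer doubles the dilation rate, which is what widens the TCN's receptive field. As a rough check, here is a sketch assuming kernel_size=2 and dilation rates of 1, 2, and 4 as above (and ignoring the pooling layers in between, which widen it slightly further):

# Receptive field of stacked dilated causal convolutions:
# each layer adds (kernel_size - 1) * dilation_rate time steps.
kernel_size = 2
dilation_rates = [1, 2, 4]
receptive_field = 1 + sum((kernel_size - 1) * d for d in dilation_rates)
print(receptive_field)  # 8 time steps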
Following that block is the code for the decoding stage (see Figure 7-57).
Figure 7-57. Code to define the decoding stage and then the final layer. The model is then initialized
# DECODING STAGE
# Pairs of upsampling and causal 1D convolutional layers comprising
# the decoding stage
# (filter counts, kernel sizes, and dilation rates are representative values)

upsample_1 = UpSampling1D(size=2)(encoder)

conv_4 = Conv1D(filters=int(input_shape[0] / 4), kernel_size=2, dilation_rate=2,
                padding='causal', strides=1,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(upsample_1)

upsample_2 = UpSampling1D(size=2)(conv_4)

conv_5 = Conv1D(filters=int(input_shape[0] / 2), kernel_size=2, dilation_rate=4,
                padding='causal', strides=1,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(upsample_2)

zero_pad_1 = ZeroPadding1D(padding=(0, 1))(conv_5)

conv_6 = Conv1D(filters=int(input_shape[0]), kernel_size=2, dilation_rate=8,
                padding='causal', strides=1,
                kernel_regularizer=regularizers.l2(0.01),
                activation='relu')(zero_pad_1)

# Output of decoding stage, flattened and passed through softmax to
# make predictions
flat = Flatten()(conv_6)
output_layer = Dense(2, activation='softmax')(flat)

TCN = Model(inputs=input_layer, outputs=output_layer)
Now that the model has been defined, let's compile it and train it (see Figure 7-58). The output should look somewhat like Figure 7-59.
TCN.compile(loss=keras.losses.categorical_crossentropy,
            optimizer=optimizers.Adam(lr=0.01),  # learning rate is a representative value
            metrics=['accuracy'])

checkpointer = ModelCheckpoint(filepath="model_ED_TCN_creditcard.h5",
                               verbose=0,
                               save_best_only=True)

TCN.summary()
Figure 7-58. Compiling the model, defining the checkpoint callback, and calling the summary function
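TensorBoard was imported in Figure 7-53 but is not used above. If you want to monitor the training curves as well, a minimal sketch (the log directory name is an assumption):

# Hypothetical log directory; pass this callback to fit() alongside
# checkpointer to record loss and accuracy curves for TensorBoard.
tensorboard = TensorBoard(log_dir="./logs")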
Notice the addition of the zero-padding layer. What this layer does is add a 0 to the data sequence in order to help the dimensions match. Because the original data had an odd number of columns, the number of dimensions in the output of the decoding stage did not match the dimensions of the original data after being upsampled (this is because of rounding issues, since everything is an integer). To counter this,

zero_pad_1 = ZeroPadding1D(padding=(0,1))(conv_5)
was included, where the tuple is formatted as (left_pad, right_pad) to customize where the padding goes. Otherwise, passing in an integer will just pad on both ends. To summarize, zero padding adds a zero to each entry in the data on the left side, the right side, or both (the default) sides.

Figure 7-59. The summary of the model. This can help you get an idea of how the encoding and decoding work by looking at the output shapes of each layer
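To see the padding behavior in isolation, here is a small standalone sketch (the sequence length of 7 and channel count of 4 are arbitrary illustrative values):

from keras import backend as K

demo_input = Input(shape=(7, 4))                          # 7 time steps, 4 channels
demo_output = ZeroPadding1D(padding=(0, 1))(demo_input)   # pad the right end only
print(K.int_shape(demo_output))                           # (None, 8, 4): 7 steps become 8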
With the model compiled, all that's left for you to do is train the model (see Figure 7-60). After a while, you should end up with something like Figure 7-61.
TCN.fit(x_train, y_train,
        batch_size=128,
        epochs=25,
        verbose=1,
        validation_data=(x_test, y_test),
        callbacks=[checkpointer])
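Once training finishes, you can reload the best checkpoint and evaluate it on the held-out set. A minimal sketch using the checkpoint file defined in Figure 7-58:

# Reload the weights saved by the ModelCheckpoint callback and
# measure loss and accuracy on the test set.
best_model = load_model("model_ED_TCN_creditcard.h5")
score = best_model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])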