Compare commits
No commits in common. "master" and "cat_loop_page_mutli" have entirely different histories.
master...cat_loop_page_mutli

$tutor$ (970 lines)
@@ -1,970 +0,0 @@
===============================================================================
=    W e l c o m e   t o   t h e   V I M   T u t o r    -    Version 1.7      =
===============================================================================

Vim is a very powerful editor that has many commands, too many to
explain in a tutor such as this. This tutor is designed to describe
enough of the commands that you will be able to easily use Vim as
an all-purpose editor.

The approximate time required to complete the tutor is 25-30 minutes,
depending upon how much time is spent with experimentation.

ATTENTION:
The commands in the lessons will modify the text. Make a copy of this
file to practice on (if you started "vimtutor" this is already a copy).

It is important to remember that this tutor is set up to teach by
use. That means that you need to execute the commands to learn them
properly. If you only read the text, you will forget the commands!

Now, make sure that your Caps-Lock key is NOT depressed and press
the j key enough times to move the cursor so that lesson 1.1
completely fills the screen.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.1: MOVING THE CURSOR


** To move the cursor, press the h,j,k,l keys as indicated. **
            ^
            k       Hint: The h key is at the left and moves left.
      < h       l >       The l key is at the right and moves right.
            j             The j key looks like a down arrow.
            v
1. Move the cursor around the screen until you are comfortable.

2. Hold down the down key (j) until it repeats.
Now you know how to move to the next lesson.

3. Using the down key, move to lesson 1.2.

NOTE: If you are ever unsure about something you typed, press <ESC> to place
you in Normal mode. Then retype the command you wanted.

NOTE: The cursor keys should also work. But using hjkl you will be able to
move around much faster, once you get used to it. Really!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.2: EXITING VIM


!! NOTE: Before executing any of the steps below, read this entire lesson!!

1. Press the <ESC> key (to make sure you are in Normal mode).

2. Type: :q! <ENTER>.
This exits the editor, DISCARDING any changes you have made.

3. Get back here by executing the command that got you into this tutor. That
might be: vimtutor <ENTER>

4. If you have these steps memorized and are confident, execute steps
1 through 3 to exit and re-enter the editor.

NOTE: :q! <ENTER> discards any changes you made. In a few lessons you
will learn how to save the changes to a file.

5. Move the cursor down to lesson 1.3.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.3: TEXT EDITING - DELETION


** Press x to delete the character under the cursor. **

1. Move the cursor to the line below marked --->.

2. To fix the errors, move the cursor until it is on top of the
character to be deleted.

3. Press the x key to delete the unwanted character.

4. Repeat steps 2 through 4 until the sentence is correct.

---> The ccow jumpedd ovverr thhe mooon.

5. Now that the line is correct, go on to lesson 1.4.

NOTE: As you go through this tutor, do not try to memorize, learn by usage.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.4: TEXT EDITING - INSERTION


** Press i to insert text. **

1. Move the cursor to the first line below marked --->.

2. To make the first line the same as the second, move the cursor on top
of the character BEFORE which the text is to be inserted.

3. Press i and type in the necessary additions.

4. As each error is fixed press <ESC> to return to Normal mode.
Repeat steps 2 through 4 to correct the sentence.

---> There is text misng this .
---> There is some text missing from this line.

5. When you are comfortable inserting text move to lesson 1.5.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.5: TEXT EDITING - APPENDING


** Press A to append text. **

1. Move the cursor to the first line below marked --->.
It does not matter on what character the cursor is in that line.

2. Press A and type in the necessary additions.

3. As the text has been appended press <ESC> to return to Normal mode.

4. Move the cursor to the second line marked ---> and repeat
steps 2 and 3 to correct this sentence.

---> There is some text missing from th
There is some text missing from this line.
---> There is also some text miss
There is also some text missing here.

5. When you are comfortable appending text move to lesson 1.6.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1.6: EDITING A FILE

** Use :wq to save a file and exit. **

!! NOTE: Before executing any of the steps below, read this entire lesson!!

1. Exit this tutor as you did in lesson 1.2: :q!
Or, if you have access to another terminal, do the following there.

2. At the shell prompt type this command: vim tutor <ENTER>
'vim' is the command to start the Vim editor, 'tutor' is the name of the
file you wish to edit. Use a file that may be changed.

3. Insert and delete text as you learned in the previous lessons.

4. Save the file with changes and exit Vim with: :wq <ENTER>

5. If you have quit vimtutor in step 1 restart the vimtutor and move down to
the following summary.

6. After reading the above steps and understanding them: do it.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 1 SUMMARY


1. The cursor is moved using either the arrow keys or the hjkl keys.
h (left) j (down) k (up) l (right)

2. To start Vim from the shell prompt type: vim FILENAME <ENTER>

3. To exit Vim type: <ESC> :q! <ENTER> to trash all changes.
OR type: <ESC> :wq <ENTER> to save the changes.

4. To delete the character at the cursor type: x

5. To insert or append text type:
i type inserted text <ESC> insert before the cursor
A type appended text <ESC> append after the line

NOTE: Pressing <ESC> will place you in Normal mode or will cancel
an unwanted and partially completed command.

Now continue with lesson 2.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.1: DELETION COMMANDS


** Type dw to delete a word. **

1. Press <ESC> to make sure you are in Normal mode.

2. Move the cursor to the line below marked --->.

3. Move the cursor to the beginning of a word that needs to be deleted.

4. Type dw to make the word disappear.

NOTE: The letter d will appear on the last line of the screen as you type
it. Vim is waiting for you to type w . If you see another character
than d you typed something wrong; press <ESC> and start over.

---> There are a some words fun that don't belong paper in this sentence.

5. Repeat steps 3 and 4 until the sentence is correct and go to lesson 2.2.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.2: MORE DELETION COMMANDS


** Type d$ to delete to the end of the line. **

1. Press <ESC> to make sure you are in Normal mode.

2. Move the cursor to the line below marked --->.

3. Move the cursor to the end of the correct line (AFTER the first . ).

4. Type d$ to delete to the end of the line.

---> Somebody typed the end of this line twice. end of this line twice.


5. Move on to lesson 2.3 to understand what is happening.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.3: ON OPERATORS AND MOTIONS


Many commands that change text are made from an operator and a motion.
The format for a delete command with the d delete operator is as follows:

d motion

Where:
d - is the delete operator.
motion - is what the operator will operate on (listed below).

A short list of motions:
w - until the start of the next word, EXCLUDING its first character.
e - to the end of the current word, INCLUDING the last character.
$ - to the end of the line, INCLUDING the last character.

Thus typing de will delete from the cursor to the end of the word.

NOTE: Pressing just the motion while in Normal mode without an operator will
move the cursor as specified.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.4: USING A COUNT FOR A MOTION


** Typing a number before a motion repeats it that many times. **

1. Move the cursor to the start of the line below marked --->.

2. Type 2w to move the cursor two words forward.

3. Type 3e to move the cursor to the end of the third word forward.

4. Type 0 (zero) to move to the start of the line.

5. Repeat steps 2 and 3 with different numbers.

---> This is just a line with words you can move around in.

6. Move on to lesson 2.5.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.5: USING A COUNT TO DELETE MORE


** Typing a number with an operator repeats it that many times. **

In the combination of the delete operator and a motion mentioned above you
insert a count before the motion to delete more:
d number motion

1. Move the cursor to the first UPPER CASE word in the line marked --->.

2. Type d2w to delete the two UPPER CASE words.

3. Repeat steps 1 and 2 with a different count to delete the consecutive
UPPER CASE words with one command.

---> this ABC DE line FGHI JK LMN OP of words is Q RS TUV cleaned up.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.6: OPERATING ON LINES


** Type dd to delete a whole line. **

Due to the frequency of whole line deletion, the designers of Vi decided
it would be easier to simply type two d's to delete a line.

1. Move the cursor to the second line in the phrase below.
2. Type dd to delete the line.
3. Now move to the fourth line.
4. Type 2dd to delete two lines.

---> 1) Roses are red,
---> 2) Mud is fun,
---> 3) Violets are blue,
---> 4) I have a car,
---> 5) Clocks tell time,
---> 6) Sugar is sweet
---> 7) And so are you.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2.7: THE UNDO COMMAND


** Press u to undo the last commands, U to fix a whole line. **

1. Move the cursor to the line below marked ---> and place it on the
first error.
2. Type x to delete the first unwanted character.
3. Now type u to undo the last command executed.
4. This time fix all the errors on the line using the x command.
5. Now type a capital U to return the line to its original state.
6. Now type u a few times to undo the U and preceding commands.
7. Now type CTRL-R (keeping CTRL key pressed while hitting R) a few times
to redo the commands (undo the undo's).

---> Fiix the errors oon thhis line and reeplace them witth undo.

8. These are very useful commands. Now move on to the lesson 2 Summary.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 2 SUMMARY


1. To delete from the cursor up to the next word type: dw
2. To delete from the cursor to the end of a line type: d$
3. To delete a whole line type: dd

4. To repeat a motion prepend it with a number: 2w
5. The format for a change command is:
operator [number] motion
where:
operator - is what to do, such as d for delete
[number] - is an optional count to repeat the motion
motion - moves over the text to operate on, such as w (word),
$ (to the end of line), etc.

6. To move to the start of the line use a zero: 0

7. To undo previous actions, type: u (lowercase u)
To undo all the changes on a line, type: U (capital U)
To undo the undo's, type: CTRL-R

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 3.1: THE PUT COMMAND


** Type p to put previously deleted text after the cursor. **

1. Move the cursor to the first line below marked --->.

2. Type dd to delete the line and store it in a Vim register.

3. Move the cursor to the c) line, ABOVE where the deleted line should go.

4. Type p to put the line below the cursor.

5. Repeat steps 2 through 4 to put all the lines in correct order.

---> d) Can you learn too?
---> b) Violets are blue,
---> c) Intelligence is learned,
---> a) Roses are red,


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 3.2: THE REPLACE COMMAND


** Type rx to replace the character at the cursor with x . **

1. Move the cursor to the first line below marked --->.

2. Move the cursor so that it is on top of the first error.

3. Type r and then the character which should be there.

4. Repeat steps 2 and 3 until the first line is equal to the second one.

---> Whan this lime was tuoed in, someone presswd some wrojg keys!
---> When this line was typed in, someone pressed some wrong keys!

5. Now move on to lesson 3.3.

NOTE: Remember that you should be learning by doing, not memorization.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 3.3: THE CHANGE OPERATOR


** To change until the end of a word, type ce . **

1. Move the cursor to the first line below marked --->.

2. Place the cursor on the u in lubw.

3. Type ce and the correct word (in this case, type ine ).

4. Press <ESC> and move to the next character that needs to be changed.

5. Repeat steps 3 and 4 until the first sentence is the same as the second.

---> This lubw has a few wptfd that mrrf changing usf the change operator.
---> This line has a few words that need changing using the change operator.

Notice that ce deletes the word and places you in Insert mode.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 3.4: MORE CHANGES USING c


** The change operator is used with the same motions as delete. **

1. The change operator works in the same way as delete. The format is:

c [number] motion

2. The motions are the same, such as w (word) and $ (end of line).

3. Move the cursor to the first line below marked --->.

4. Move the cursor to the first error.

5. Type c$ and type the rest of the line like the second and press <ESC>.

---> The end of this line needs some help to make it like the second.
---> The end of this line needs to be corrected using the c$ command.

NOTE: You can use the Backspace key to correct mistakes while typing.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 3 SUMMARY


1. To put back text that has just been deleted, type p . This puts the
deleted text AFTER the cursor (if a line was deleted it will go on the
line below the cursor).

2. To replace the character under the cursor, type r and then the
character you want to have there.

3. The change operator allows you to change from the cursor to where the
motion takes you. eg. Type ce to change from the cursor to the end of
the word, c$ to change to the end of a line.

4. The format for change is:

c [number] motion

Now go on to the next lesson.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 4.1: CURSOR LOCATION AND FILE STATUS

** Type CTRL-G to show your location in the file and the file status.
Type G to move to a line in the file. **

NOTE: Read this entire lesson before executing any of the steps!!

1. Hold down the Ctrl key and press g . We call this CTRL-G.
A message will appear at the bottom of the page with the filename and the
position in the file. Remember the line number for Step 3.

NOTE: You may see the cursor position in the lower right corner of the screen
This happens when the 'ruler' option is set (see :help 'ruler' )

2. Press G to move you to the bottom of the file.
Type gg to move you to the start of the file.

3. Type the number of the line you were on and then G . This will
return you to the line you were on when you first pressed CTRL-G.

4. If you feel confident to do this, execute steps 1 through 3.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 4.2: THE SEARCH COMMAND


** Type / followed by a phrase to search for the phrase. **

1. In Normal mode type the / character. Notice that it and the cursor
appear at the bottom of the screen as with the : command.

2. Now type 'errroor' <ENTER>. This is the word you want to search for.

3. To search for the same phrase again, simply type n .
To search for the same phrase in the opposite direction, type N .

4. To search for a phrase in the backward direction, use ? instead of / .

5. To go back to where you came from press CTRL-O (Keep Ctrl down while
pressing the letter o). Repeat to go back further. CTRL-I goes forward.

---> "errroor" is not the way to spell error; errroor is an error.
NOTE: When the search reaches the end of the file it will continue at the
start, unless the 'wrapscan' option has been reset.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 4.3: MATCHING PARENTHESES SEARCH


** Type % to find a matching ),], or } . **

1. Place the cursor on any (, [, or { in the line below marked --->.

2. Now type the % character.

3. The cursor will move to the matching parenthesis or bracket.

4. Type % to move the cursor to the other matching bracket.

5. Move the cursor to another (,),[,],{ or } and see what % does.

---> This ( is a test line with ('s, ['s ] and {'s } in it. ))


NOTE: This is very useful in debugging a program with unmatched parentheses!


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 4.4: THE SUBSTITUTE COMMAND


** Type :s/old/new/g to substitute 'new' for 'old'. **

1. Move the cursor to the line below marked --->.

2. Type :s/thee/the <ENTER> . Note that this command only changes the
first occurrence of "thee" in the line.

3. Now type :s/thee/the/g . Adding the g flag means to substitute
globally in the line, change all occurrences of "thee" in the line.

---> thee best time to see thee flowers is in thee spring.

4. To change every occurrence of a character string between two lines,
type :#,#s/old/new/g where #,# are the line numbers of the range
of lines where the substitution is to be done.
Type :%s/old/new/g to change every occurrence in the whole file.
Type :%s/old/new/gc to find every occurrence in the whole file,
with a prompt whether to substitute or not.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 4 SUMMARY


1. CTRL-G displays your location in the file and the file status.
G moves to the end of the file.
number G moves to that line number.
gg moves to the first line.

2. Typing / followed by a phrase searches FORWARD for the phrase.
Typing ? followed by a phrase searches BACKWARD for the phrase.
After a search type n to find the next occurrence in the same direction
or N to search in the opposite direction.
CTRL-O takes you back to older positions, CTRL-I to newer positions.

3. Typing % while the cursor is on a (,),[,],{, or } goes to its match.

4. To substitute new for the first old in a line type :s/old/new
To substitute new for all 'old's on a line type :s/old/new/g
To substitute phrases between two line #'s type :#,#s/old/new/g
To substitute all occurrences in the file type :%s/old/new/g
To ask for confirmation each time add 'c' :%s/old/new/gc

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 5.1: HOW TO EXECUTE AN EXTERNAL COMMAND


** Type :! followed by an external command to execute that command. **

1. Type the familiar command : to set the cursor at the bottom of the
screen. This allows you to enter a command-line command.

2. Now type the ! (exclamation point) character. This allows you to
execute any external shell command.

3. As an example type ls following the ! and then hit <ENTER>. This
will show you a listing of your directory, just as if you were at the
shell prompt. Or use :!dir if ls doesn't work.

NOTE: It is possible to execute any external command this way, also with
arguments.

NOTE: All : commands must be finished by hitting <ENTER>
From here on we will not always mention it.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 5.2: MORE ON WRITING FILES


** To save the changes made to the text, type :w FILENAME **

1. Type :!dir or :!ls to get a listing of your directory.
You already know you must hit <ENTER> after this.

2. Choose a filename that does not exist yet, such as TEST.

3. Now type: :w TEST (where TEST is the filename you chose.)

4. This saves the whole file (the Vim Tutor) under the name TEST.
To verify this, type :!dir or :!ls again to see your directory.

NOTE: If you were to exit Vim and start it again with vim TEST , the file
would be an exact copy of the tutor when you saved it.

5. Now remove the file by typing (Windows): :!del TEST
or (Unix): :!rm TEST


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 5.3: SELECTING TEXT TO WRITE


** To save part of the file, type v motion :w FILENAME **

1. Move the cursor to this line.

2. Press v and move the cursor to the fifth item below. Notice that the
text is highlighted.

3. Press the : character. At the bottom of the screen :'<,'> will appear.

4. Type w TEST , where TEST is a filename that does not exist yet. Verify
that you see :'<,'>w TEST before you press <ENTER>.

5. Vim will write the selected lines to the file TEST. Use :!dir or :!ls
to see it. Do not remove it yet! We will use it in the next lesson.

NOTE: Pressing v starts Visual selection. You can move the cursor around
to make the selection bigger or smaller. Then you can use an operator
to do something with the text. For example, d deletes the text.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 5.4: RETRIEVING AND MERGING FILES


** To insert the contents of a file, type :r FILENAME **

1. Place the cursor just above this line.

NOTE: After executing Step 2 you will see text from lesson 5.3. Then move
DOWN to see this lesson again.

2. Now retrieve your TEST file using the command :r TEST where TEST is
the name of the file you used.
The file you retrieve is placed below the cursor line.

3. To verify that a file was retrieved, cursor back and notice that there
are now two copies of lesson 5.3, the original and the file version.

NOTE: You can also read the output of an external command. For example,
:r !ls reads the output of the ls command and puts it below the
cursor.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 5 SUMMARY


1. :!command executes an external command.

Some useful examples are:
(Windows) (Unix)
:!dir :!ls - shows a directory listing.
:!del FILENAME :!rm FILENAME - removes file FILENAME.

2. :w FILENAME writes the current Vim file to disk with name FILENAME.

3. v motion :w FILENAME saves the Visually selected lines in file
FILENAME.

4. :r FILENAME retrieves disk file FILENAME and puts it below the
cursor position.

5. :r !dir reads the output of the dir command and puts it below the
cursor position.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6.1: THE OPEN COMMAND


** Type o to open a line below the cursor and place you in Insert mode. **

1. Move the cursor to the first line below marked --->.

2. Type the lowercase letter o to open up a line BELOW the cursor and place
you in Insert mode.

3. Now type some text and press <ESC> to exit Insert mode.

---> After typing o the cursor is placed on the open line in Insert mode.

4. To open up a line ABOVE the cursor, simply type a capital O , rather
than a lowercase o. Try this on the line below.

---> Open up a line above this by typing O while the cursor is on this line.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6.2: THE APPEND COMMAND


** Type a to insert text AFTER the cursor. **

1. Move the cursor to the start of the first line below marked --->.

2. Press e until the cursor is on the end of li .

3. Type an a (lowercase) to append text AFTER the cursor.

4. Complete the word like the line below it. Press <ESC> to exit Insert
mode.

5. Use e to move to the next incomplete word and repeat steps 3 and 4.

---> This li will allow you to pract appendi text to a line.
---> This line will allow you to practice appending text to a line.

NOTE: a, i and A all go to the same Insert mode, the only difference is where
the characters are inserted.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6.3: ANOTHER WAY TO REPLACE


** Type a capital R to replace more than one character. **

1. Move the cursor to the first line below marked --->. Move the cursor to
the beginning of the first xxx .

2. Now press R and type the number below it in the second line, so that it
replaces the xxx .

3. Press <ESC> to leave Replace mode. Notice that the rest of the line
remains unmodified.

4. Repeat the steps to replace the remaining xxx.

---> Adding 123 to xxx gives you xxx.
---> Adding 123 to 456 gives you 579.

NOTE: Replace mode is like Insert mode, but every typed character deletes an
existing character.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6.4: COPY AND PASTE TEXT


** Use the y operator to copy text and p to paste it **

1. Move to the line below marked ---> and place the cursor after "a)".

2. Start Visual mode with v and move the cursor to just before "first".

3. Type y to yank (copy) the highlighted text.

4. Move the cursor to the end of the next line: j$

5. Type p to put (paste) the text. Then type: a second <ESC> .

6. Use Visual mode to select " item.", yank it with y , move to the end of
the next line with j$ and put the text there with p .

---> a) this is the first item.
b)

NOTE: You can also use y as an operator; yw yanks one word.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6.5: SET OPTION


** Set an option so a search or substitute ignores case **

1. Search for 'ignore' by entering: /ignore <ENTER>
Repeat several times by pressing n .

2. Set the 'ic' (Ignore case) option by entering: :set ic

3. Now search for 'ignore' again by pressing n
Notice that Ignore and IGNORE are now also found.

4. Set the 'hlsearch' and 'incsearch' options: :set hls is

5. Now type the search command again and see what happens: /ignore <ENTER>

6. To disable ignoring case enter: :set noic

NOTE: To remove the highlighting of matches enter: :nohlsearch
NOTE: If you want to ignore case for just one search command, use \c
in the phrase: /ignore\c <ENTER>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 6 SUMMARY

1. Type o to open a line BELOW the cursor and start Insert mode.
Type O to open a line ABOVE the cursor.

2. Type a to insert text AFTER the cursor.
Type A to insert text after the end of the line.

3. The e command moves to the end of a word.

4. The y operator yanks (copies) text, p puts (pastes) it.

5. Typing a capital R enters Replace mode until <ESC> is pressed.

6. Typing ":set xxx" sets the option "xxx". Some options are:
'ic' 'ignorecase' ignore upper/lower case when searching
'is' 'incsearch' show partial matches for a search phrase
'hls' 'hlsearch' highlight all matching phrases
You can either use the long or the short option name.

7. Prepend "no" to switch an option off: :set noic

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 7.1: GETTING HELP


** Use the on-line help system **

Vim has a comprehensive on-line help system. To get started, try one of
these three:
- press the <HELP> key (if you have one)
- press the <F1> key (if you have one)
- type :help <ENTER>

Read the text in the help window to find out how the help works.
Type CTRL-W CTRL-W to jump from one window to another.
Type :q <ENTER> to close the help window.

You can find help on just about any subject, by giving an argument to the
":help" command. Try these (don't forget pressing <ENTER>):

:help w
:help c_CTRL-D
:help insert-index
:help user-manual
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 7.2: CREATE A STARTUP SCRIPT


** Enable Vim features **

Vim has many more features than Vi, but most of them are disabled by
default. To start using more features you have to create a "vimrc" file.

1. Start editing the "vimrc" file. This depends on your system:
:e ~/.vimrc for Unix
:e $VIM/_vimrc for Windows

2. Now read the example "vimrc" file contents:
:r $VIMRUNTIME/vimrc_example.vim

3. Write the file with:
:w

The next time you start Vim it will use syntax highlighting.
You can add all your preferred settings to this "vimrc" file.
For more information type :help vimrc-intro

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 7.3: COMPLETION


** Command line completion with CTRL-D and <TAB> **

1. Make sure Vim is not in compatible mode: :set nocp

2. Look what files exist in the directory: :!ls or :!dir

3. Type the start of a command: :e

4. Press CTRL-D and Vim will show a list of commands that start with "e".

5. Type d<TAB> and Vim will complete the command name to ":edit".

6. Now add a space and the start of an existing file name: :edit FIL

7. Press <TAB>. Vim will complete the name (if it is unique).

NOTE: Completion works for many commands. Just try pressing CTRL-D and
<TAB>. It is especially useful for :help .

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lesson 7 SUMMARY


1. Type :help or press <F1> or <HELP> to open a help window.

2. Type :help cmd to find help on cmd .

3. Type CTRL-W CTRL-W to jump to another window.

4. Type :q to close the help window.

5. Create a vimrc startup script to keep your preferred settings.

6. When typing a : command, press CTRL-D to see possible completions.
Press <TAB> to use one completion.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This concludes the Vim Tutor. It was intended to give a brief overview of
the Vim editor, just enough to allow you to use the editor fairly easily.
It is far from complete as Vim has many many more commands. Read the user
manual next: ":help user-manual".

For further reading and studying, this book is recommended:
Vim - Vi Improved - by Steve Oualline
Publisher: New Riders
The first book completely dedicated to Vim. Especially useful for beginners.
There are many examples and pictures.
See http://iccf-holland.org/click5.html

This book is older and more about Vi than Vim, but also recommended:
Learning the Vi Editor - by Linda Lamb
Publisher: O'Reilly & Associates Inc.
It is a good book to get to know almost anything you want to do with Vi.
The sixth edition also includes information on Vim.

This tutorial was written by Michael C. Pierce and Robert K. Ware,
Colorado School of Mines using ideas supplied by Charles Smith,
Colorado State University. E-mail: bware@mines.colorado.edu.

Modified for Vim by Bram Moolenaar.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.gitignore (vendored, 162 lines)
@@ -1,162 +0,0 @@
# User-defined
# blahblah
*.h5
*.txt
*.csv
*.yaml
config.py
training_images/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintainted in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
File diff suppressed because one or more lines are too long
@@ -1,398 +0,0 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "572dc7fb",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2022-08-01 23:57:09.348119: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n"
     ]
    }
   ],
   "source": [
    "from matplotlib import pyplot as plt\n",
    "from matplotlib.image import imread\n",
    "import pandas as pd\n",
    "from collections import Counter\n",
    "import json\n",
    "import os\n",
    "import re\n",
    "import tempfile\n",
    "import numpy as np\n",
    "from os.path import exists\n",
    "from imblearn.under_sampling import RandomUnderSampler\n",
    "from PIL import ImageFile\n",
    "import sklearn as sk\n",
    "from sklearn.model_selection import train_test_split, StratifiedShuffleSplit\n",
    "import tensorflow as tf\n",
    "import tensorflow.keras\n",
    "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
    "from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten, Activation\n",
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.optimizers import Adam\n",
    "# custom modules\n",
    "import image_faults\n",
    "\n",
    "ImageFile.LOAD_TRUNCATED_IMAGES = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "6ea418cc",
   "metadata": {},
   "outputs": [],
   "source": [
    "def add_regularization(model, regularizer=tf.keras.regularizers.l2(0.0001)):\n",
    "\n",
    "    if not isinstance(regularizer, tf.keras.regularizers.Regularizer):\n",
    "        print(\"Regularizer must be a subclass of tf.keras.regularizers.Regularizer\")\n",
    "        return model\n",
    "\n",
    "    for layer in model.layers:\n",
    "        for attr in ['kernel_regularizer']:\n",
    "            if hasattr(layer, attr):\n",
    "                setattr(layer, attr, regularizer)\n",
    "\n",
    "    # When we change the layers attributes, the change only happens in the model config file\n",
    "    model_json = model.to_json()\n",
    "\n",
    "    # Save the weights before reloading the model.\n",
    "    tmp_weights_path = os.path.join(tempfile.gettempdir(), 'tmp_weights.h5')\n",
    "    model.save_weights(tmp_weights_path)\n",
    "\n",
    "    # load the model from the config\n",
    "    model = tf.keras.models.model_from_json(model_json)\n",
    "\n",
    "    # Reload the model weights\n",
    "    model.load_weights(tmp_weights_path, by_name=True)\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a5c72863",
   "metadata": {},
   "outputs": [],
   "source": [
    "image_faults.faulty_images() # removes faulty images\n",
    "df = pd.read_csv('expanded_class.csv', index_col=[0], low_memory=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "1057a442",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def dict_pics_jup():\n",
    "    '''\n",
    "    {source:target} dict used to replace source urls with image location as input\n",
    "    '''\n",
    "    target_dir = os.getcwd() + os.sep + \"training_images\"\n",
    "    with open('temp_pics_source_list.txt') as f:\n",
    "        temp_pics_source_list = json.load(f)\n",
    "\n",
    "    dict_pics = {}\n",
    "    for k in temp_pics_source_list:\n",
    "        patt_1 = re.search(r'[^/]+(?=/\\$_|.(\\.jpg|\\.jpeg|\\.png))', k, re.IGNORECASE)\n",
    "        patt_2 = re.search(r'(\\.jpg|\\.jpeg|\\.png)', k, re.IGNORECASE)\n",
    "        if patt_1 and patt_2 is not None:\n",
    "            tag = patt_1.group() + patt_2.group().lower()\n",
    "            file_name = target_dir + os.sep + tag\n",
    "            dict_pics.update({k:file_name})\n",
    "    print(\"{source:target} dictionary created @ \" + target_dir)\n",
    "    return dict_pics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "7a6146e6",
   "metadata": {},
   "outputs": [
    {
     "ename": "TypeError",
     "evalue": "expected string or bytes-like object",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-5-0009b269209e>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mdict_pics\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdict_pics_jup\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'women_cat_list.txt'\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mf\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mwomen_cats\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mjson\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mload\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'men_cat_list.txt'\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mf\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m<ipython-input-4-4701772f6383>\u001b[0m in \u001b[0;36mdict_pics_jup\u001b[0;34m()\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0mdict_pics\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m{\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mk\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mtemp_pics_source_list\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 11\u001b[0;31m \u001b[0mpatt_1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mre\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msearch\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mr'[^/]+(?=/\\$_|.(\\.jpg|\\.jpeg|\\.png))'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mk\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mre\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mIGNORECASE\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 12\u001b[0m \u001b[0mpatt_2\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mre\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msearch\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mr'(\\.jpg|\\.jpeg|\\.png)'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mk\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mre\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mIGNORECASE\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 13\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mpatt_1\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mpatt_2\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/usr/lib/python3.8/re.py\u001b[0m in \u001b[0;36msearch\u001b[0;34m(pattern, string, flags)\u001b[0m\n\u001b[1;32m 199\u001b[0m \"\"\"Scan through string looking for a match to the pattern, returning\n\u001b[1;32m 200\u001b[0m a Match object, or None if no match was found.\"\"\"\n\u001b[0;32m--> 201\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0m_compile\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpattern\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mflags\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msearch\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstring\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 202\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 203\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0msub\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpattern\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrepl\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstring\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcount\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mflags\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mTypeError\u001b[0m: expected string or bytes-like object"
     ]
    }
   ],
   "source": [
    "dict_pics = dict_pics_jup()\n",
    "\n",
    "with open('women_cat_list.txt') as f:\n",
    "    women_cats = json.load(f)\n",
    "with open('men_cat_list.txt') as f:\n",
    "    men_cats = json.load(f)\n",
    "\n",
    "with open('temp_pics_source_list.txt') as f:\n",
    "    tempics = json.load(f)\n",
    "# list of image urls that did not get named properly which will be removed from the dataframe\n",
    "drop_row_vals = []\n",
    "for pic in tempics:\n",
    "    try:\n",
    "        dict_pics[pic]\n",
    "    except KeyError:\n",
    "        drop_row_vals.append(pic)\n",
    "\n",
    "df['PrimaryCategoryID'] = df['PrimaryCategoryID'].astype(str) # pandas thinks ids are ints\n",
    "ddf = df[df.PictureURL.isin(drop_row_vals)==False] # remove improperly named image files\n",
    "df = ddf[ddf.PrimaryCategoryID.isin(men_cats)==False] # removes rows of womens categories\n",
    "\n",
    "blah = pd.Series(df.PictureURL)\n",
    "df = df.drop(labels=['PictureURL'], axis=1)\n",
    "\n",
    "blah = blah.apply(lambda x: dict_pics[x])\n",
    "df = pd.concat([blah, df],axis=1)\n",
    "df = df.groupby('PrimaryCategoryID').filter(lambda x: len(x)>25) # removes cat outliers\n",
    "\n",
    "df=df.sample(frac=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a3a86a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "undersample = RandomUnderSampler(sampling_strategy='auto')\n",
    "train, y_under = undersample.fit_resample(df, df['PrimaryCategoryID'])\n",
    "print(Counter(train['PrimaryCategoryID']))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "506aa5cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "train, test = train_test_split(train, test_size=0.2, random_state=42)\n",
    "# stratify=train['PrimaryCategoryID']\n",
    "# train['PrimaryCategoryID'].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d72eb90",
   "metadata": {},
   "outputs": [],
   "source": [
    "datagen = ImageDataGenerator(rescale=1./255., \n",
    "                             validation_split=.2,\n",
    "                             #samplewise_std_normalization=True,\n",
    "                             #horizontal_flip= True,\n",
    "                             #vertical_flip= True,\n",
    "                             #width_shift_range= 0.2,\n",
    "                             #height_shift_range= 0.2,\n",
    "                             #rotation_range= 90,\n",
    "                             preprocessing_function=tf.keras.applications.vgg16.preprocess_input)\n",
    "train_generator=datagen.flow_from_dataframe(\n",
    "    dataframe=train[:len(train)],\n",
    "    directory='./training_images',\n",
    "    x_col='PictureURL',\n",
    "    y_col='PrimaryCategoryID',\n",
    "    batch_size=32,\n",
    "    seed=42,\n",
    "    shuffle=True,\n",
    "    target_size=(244,244),\n",
    "    subset='training'\n",
    "    )\n",
    "validation_generator=datagen.flow_from_dataframe(\n",
    "    dataframe=train[:len(train)], # is using train right?\n",
    "    directory='./training_images',\n",
    "    x_col='PictureURL',\n",
    "    y_col='PrimaryCategoryID',\n",
    "    batch_size=32,\n",
    "    seed=42,\n",
    "    shuffle=True,\n",
    "    target_size=(244,244),\n",
    "    subset='validation'\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b70f37f",
   "metadata": {},
   "outputs": [],
   "source": [
    "imgs, labels = next(train_generator)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ed54bf5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def plotImages(images_arr):\n",
    "    fig, axes = plt.subplots(1, 10, figsize=(20,20))\n",
    "    axes = axes.flatten()\n",
    "    for img, ax in zip( images_arr, axes):\n",
    "        ax.imshow(img)\n",
    "        ax.axis('off')\n",
    "    plt.tight_layout()\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "85934565",
   "metadata": {},
   "outputs": [],
   "source": [
    "#plotImages(imgs)\n",
    "#print(labels)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6322bcad",
   "metadata": {},
   "outputs": [],
   "source": [
    "physical_devices = tf.config.list_physical_devices('GPU')\n",
    "print(len(physical_devices))\n",
    "tf.config.experimental.set_memory_growth(physical_devices[0], True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b31af79e",
   "metadata": {},
   "outputs": [],
   "source": [
    "base_model = tf.keras.applications.vgg16.VGG16(weights='imagenet')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fe06f2bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "#model = Sequential()\n",
    "#for layer in base_model.layers[:-1]:\n",
    "#    model.add(layer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d3cc82c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# loop through layers, add Dropout after layers 'fc1' and 'fc2'\n",
    "updated_model = Sequential()\n",
    "for layer in base_model.layers[:-1]:\n",
    "    updated_model.add(layer)\n",
    "    if layer.name in ['fc1', 'fc2']:\n",
    "        updated_model.add(Dropout(.50))\n",
    "\n",
    "model = updated_model\n",
    "\n",
    "for layer in model.layers:\n",
    "    layer.trainable = True\n",
    "\n",
    "model.add(Dense(units=7, activation='softmax'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c774d787",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "#model = add_regularization(model)\n",
    "model.summary()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd5d1246",
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(optimizer=Adam(learning_rate=.0001), loss='categorical_crossentropy',\n",
    "              metrics=['accuracy'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9cd2ba27",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "model.fit(x=train_generator,\n",
    "          steps_per_epoch=len(train_generator),\n",
    "          validation_data=validation_generator,\n",
    "          validation_steps=len(validation_generator),\n",
    "          epochs=30,\n",
    "          verbose=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "63f791af",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@ -1,522 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "572dc7fb",
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n",
"from matplotlib.image import imread\n",
"import pandas as pd\n",
"from collections import Counter\n",
"import json\n",
"import os\n",
"import re\n",
"import tempfile\n",
"import numpy as np\n",
"from os.path import exists\n",
"from imblearn.under_sampling import RandomUnderSampler\n",
"from PIL import ImageFile\n",
"import sklearn as sk\n",
"from sklearn.model_selection import train_test_split, StratifiedShuffleSplit\n",
"import tensorflow as tf\n",
"import tensorflow.keras\n",
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten, Activation\n",
"from tensorflow.keras.models import Sequential\n",
"from tensorflow.keras.optimizers import Adam\n",
"# custom modules\n",
"import image_faults\n",
"\n",
"ImageFile.LOAD_TRUNCATED_IMAGES = True"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def add_regularization(model, regularizer=tf.keras.regularizers.l2(0.0001)):\n",
"\n",
"    if not isinstance(regularizer, tf.keras.regularizers.Regularizer):\n",
"        print(\"Regularizer must be a subclass of tf.keras.regularizers.Regularizer\")\n",
"        return model\n",
"\n",
"    for layer in model.layers:\n",
"        for attr in ['kernel_regularizer']:\n",
"            if hasattr(layer, attr):\n",
"                setattr(layer, attr, regularizer)\n",
"\n",
"    # setattr on a built layer only updates its config; serialize and reload for it to take effect\n",
"    model_json = model.to_json()\n",
"\n",
"    # Save the weights before reloading the model.\n",
"    tmp_weights_path = os.path.join(tempfile.gettempdir(), 'tmp_weights.h5')\n",
"    model.save_weights(tmp_weights_path)\n",
"\n",
"    # load the model from the config\n",
"    model = tf.keras.models.model_from_json(model_json)\n",
"\n",
"    # Reload the model weights\n",
"    model.load_weights(tmp_weights_path, by_name=True)\n",
"    return model"
]
},
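add_regularization is defined here but only ever invoked from a commented-out line further down. Intended usage is roughly the following sketch (the l2 factor shown is just the function's own default, and model_from_json returns an uncompiled model, so recompiling afterwards is required):

# attach weight decay to every layer exposing a kernel_regularizer slot,
# then round-trip through JSON so the change is actually applied
model = add_regularization(model, tf.keras.regularizers.l2(0.0001))
model.compile(optimizer=Adam(learning_rate=.0001),
              loss='categorical_crossentropy', metrics=['accuracy'])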
{
"cell_type": "code",
"execution_count": 3,
"id": "a5c72863",
"metadata": {},
"outputs": [],
"source": [
"# image_faults.faulty_images() # removes faulty images\n",
"df = pd.read_csv('expanded_class.csv', index_col=[0], low_memory=False)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1057a442",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{source:target} dictionary created @ /tf/training_images\n"
]
}
],
"source": [
"def dict_pics_jup():\n",
"    '''\n",
"    {source:target} dict used to replace source urls with image location as input\n",
"    '''\n",
"    target_dir = os.getcwd() + os.sep + \"training_images\"\n",
"    with open('temp_pics_source_list.txt') as f:\n",
"        temp_pics_source_list = json.load(f)\n",
"\n",
"    dict_pics = {}\n",
"    for k in temp_pics_source_list:\n",
"        patt_1 = re.search(r'[^/]+(?=/\\$_|.(\\.jpg|\\.jpeg|\\.png))', k, re.IGNORECASE)\n",
"        patt_2 = re.search(r'(\\.jpg|\\.jpeg|\\.png)', k, re.IGNORECASE)\n",
"        if patt_1 and patt_2 is not None:\n",
"            tag = patt_1.group() + patt_2.group().lower()\n",
"            file_name = target_dir + os.sep + tag\n",
"            dict_pics.update({k:file_name})\n",
"    print(\"{source:target} dictionary created @ \" + target_dir)\n",
"    return dict_pics"
]
},
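One subtlety in dict_pics_jup: the guard `if patt_1 and patt_2 is not None:` parses as `patt_1 and (patt_2 is not None)`. It only behaves as intended because a re.Match object is truthy and None is falsy; the explicit form avoids relying on that:

if patt_1 is not None and patt_2 is not None:
    tag = patt_1.group() + patt_2.group().lower()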
{
"cell_type": "code",
"execution_count": 5,
"id": "7a6146e6",
"metadata": {},
"outputs": [],
"source": [
"dict_pics = dict_pics_jup()\n",
"\n",
"with open('women_cat_list.txt') as f:\n",
"    women_cats = json.load(f)\n",
"with open('men_cat_list.txt') as f:\n",
"    men_cats = json.load(f)\n",
"\n",
"with open('temp_pics_source_list.txt') as f:\n",
"    tempics = json.load(f)\n",
"# image urls that were not named properly; their rows get dropped from the dataframe\n",
"drop_row_vals = []\n",
"for pic in tempics:\n",
"    try:\n",
"        dict_pics[pic]\n",
"    except KeyError:\n",
"        drop_row_vals.append(pic)\n",
"\n",
"df['PrimaryCategoryID'] = df['PrimaryCategoryID'].astype(str) # pandas thinks ids are ints\n",
"ddf = df[df.PictureURL.isin(drop_row_vals)==False] # remove improperly named image files\n",
"df = ddf[ddf.PrimaryCategoryID.isin(men_cats)==False] # drops rows whose PrimaryCategoryID appears in men_cats\n",
"\n",
"blah = pd.Series(df.PictureURL)\n",
"df = df.drop(labels=['PictureURL'], axis=1)\n",
"\n",
"blah = blah.apply(lambda x: dict_pics[x])\n",
"df = pd.concat([blah, df],axis=1)\n",
"df = df.groupby('PrimaryCategoryID').filter(lambda x: len(x)>25) # removes cat outliers\n",
"\n",
"df=df.sample(frac=1)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Counter({'11498': 3913, '11504': 3913, '11505': 3913, '11632': 3913, '15709': 3913, '24087': 3913, '45333': 3913, '53120': 3913, '53548': 3913, '53557': 3913, '55793': 3913, '62107': 3913, '95672': 3913})\n"
]
}
],
"source": [
"undersample = RandomUnderSampler(sampling_strategy='auto')\n",
"train, y_under = undersample.fit_resample(df, df['PrimaryCategoryID'])\n",
"#print(Counter(train['PrimaryCategoryID']))"
]
},
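The Counter output above shows what the undersampler does: with sampling_strategy='auto', every class is cut down to the size of the rarest class (3913 rows each here). A toy illustration of the same call (assuming only pandas and imbalanced-learn; the data is made up):

import pandas as pd
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler

toy = pd.DataFrame({'PictureURL': [f'img{i}.jpg' for i in range(10)],
                    'PrimaryCategoryID': ['a'] * 7 + ['b'] * 3})
balanced, _ = RandomUnderSampler(sampling_strategy='auto').fit_resample(
    toy, toy['PrimaryCategoryID'])
print(Counter(balanced['PrimaryCategoryID']))   # every class trimmed to 3 rows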
{
"cell_type": "code",
"execution_count": 7,
"id": "506aa5cf",
"metadata": {},
"outputs": [],
"source": [
"train, test = train_test_split(train, test_size=0.2, random_state=42)\n",
"# stratify=train['PrimaryCategoryID']\n",
"# train['PrimaryCategoryID'].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "4d72eb90",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 32547 validated image filenames belonging to 13 classes.\n",
"Found 8136 validated image filenames belonging to 13 classes.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.8/dist-packages/keras_preprocessing/image/dataframe_iterator.py:279: UserWarning: Found 12 invalid image filename(s) in x_col=\"PictureURL\". These filename(s) will be ignored.\n",
"  warnings.warn(\n"
]
}
],
"source": [
"datagen = ImageDataGenerator(rescale=1./255.,\n",
"                             validation_split=.2,\n",
"                             #samplewise_std_normalization=True,\n",
"                             #horizontal_flip= True,\n",
"                             #vertical_flip= True,\n",
"                             #width_shift_range= 0.2,\n",
"                             #height_shift_range= 0.2,\n",
"                             #rotation_range= 90,\n",
"                             preprocessing_function=tf.keras.applications.vgg16.preprocess_input)\n",
"train_generator=datagen.flow_from_dataframe(\n",
"    dataframe=train[:len(train)],\n",
"    directory='./training_images',\n",
"    x_col='PictureURL',\n",
"    y_col='PrimaryCategoryID',\n",
"    batch_size=32,\n",
"    seed=42,\n",
"    shuffle=True,\n",
"    target_size=(244,244),\n",
"    subset='training'\n",
"    )\n",
"validation_generator=datagen.flow_from_dataframe(\n",
"    dataframe=train[:len(train)], # same dataframe as above; validation_split=.2 carves out the validation subset\n",
"    directory='./training_images',\n",
"    x_col='PictureURL',\n",
"    y_col='PrimaryCategoryID',\n",
"    batch_size=32,\n",
"    seed=42,\n",
"    shuffle=True,\n",
"    target_size=(244,244),\n",
"    subset='validation'\n",
"    )"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7b70f37f",
"metadata": {},
"outputs": [],
"source": [
"imgs, labels = next(train_generator)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1ed54bf5",
"metadata": {},
"outputs": [],
"source": [
"def plotImages(images_arr):\n",
"    fig, axes = plt.subplots(1, 10, figsize=(20,20))\n",
"    axes = axes.flatten()\n",
"    for img, ax in zip(images_arr, axes):\n",
"        ax.imshow(img)\n",
"        ax.axis('off')\n",
"    plt.tight_layout()\n",
"    plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "85934565",
"metadata": {},
"outputs": [],
"source": [
"#plotImages(imgs)\n",
"#print(labels)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6322bcad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1\n"
]
}
],
"source": [
"physical_devices = tf.config.list_physical_devices('GPU')\n",
"print(len(physical_devices))\n",
"tf.config.experimental.set_memory_growth(physical_devices[0], True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "b31af79e",
"metadata": {},
"outputs": [],
"source": [
"base_model = tf.keras.applications.vgg19.VGG19(weights='imagenet')"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "fe06f2bf",
"metadata": {},
"outputs": [],
"source": [
"model = Sequential()\n",
"for layer in base_model.layers[:-1]:\n",
"    model.add(layer)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "7d3cc82c",
"metadata": {},
"outputs": [],
"source": [
"for layer in model.layers:\n",
"    layer.trainable = True\n",
"\n",
"#model.add(Dropout(.5))\n",
"#model.add(Dense(64, activation='softmax'))\n",
"# model.add(Dropout(.25))\n",
"\n",
"model.add(Dense(units=7, activation='softmax'))"
]
},
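The Dense head above hard-codes units=7, yet the generators in this notebook report 13 classes, and the summary that follows was clearly produced by a run with a 13-unit head. Deriving the width from the generator avoids the mismatch (a sketch using the generator's class_indices mapping):

num_classes = len(train_generator.class_indices)   # 13 for this dataset
model.add(Dense(units=num_classes, activation='softmax'))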
{
"cell_type": "code",
"execution_count": 18,
"id": "c774d787",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
" Layer (type)                Output Shape              Param #   \n",
"=================================================================\n",
" block1_conv1 (Conv2D)       (None, 224, 224, 64)      1792      \n",
" block1_conv2 (Conv2D)       (None, 224, 224, 64)      36928     \n",
" block1_pool (MaxPooling2D)  (None, 112, 112, 64)      0         \n",
" block2_conv1 (Conv2D)       (None, 112, 112, 128)     73856     \n",
" block2_conv2 (Conv2D)       (None, 112, 112, 128)     147584    \n",
" block2_pool (MaxPooling2D)  (None, 56, 56, 128)       0         \n",
" block3_conv1 (Conv2D)       (None, 56, 56, 256)       295168    \n",
" block3_conv2 (Conv2D)       (None, 56, 56, 256)       590080    \n",
" block3_conv3 (Conv2D)       (None, 56, 56, 256)       590080    \n",
" block3_pool (MaxPooling2D)  (None, 28, 28, 256)       0         \n",
" block4_conv1 (Conv2D)       (None, 28, 28, 512)       1180160   \n",
" block4_conv2 (Conv2D)       (None, 28, 28, 512)       2359808   \n",
" block4_conv3 (Conv2D)       (None, 28, 28, 512)       2359808   \n",
" block4_pool (MaxPooling2D)  (None, 14, 14, 512)       0         \n",
" block5_conv1 (Conv2D)       (None, 14, 14, 512)       2359808   \n",
" block5_conv2 (Conv2D)       (None, 14, 14, 512)       2359808   \n",
" block5_conv3 (Conv2D)       (None, 14, 14, 512)       2359808   \n",
" block5_pool (MaxPooling2D)  (None, 7, 7, 512)         0         \n",
" flatten (Flatten)           (None, 25088)             0         \n",
" fc1 (Dense)                 (None, 4096)              102764544 \n",
" fc2 (Dense)                 (None, 4096)              16781312  \n",
" dense (Dense)               (None, 13)                53261     \n",
"=================================================================\n",
"Total params: 134,313,805\n",
"Trainable params: 134,313,805\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"#model = add_regularization(model)\n",
"model.summary()\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "fd5d1246",
"metadata": {},
"outputs": [],
"source": [
"model.compile(optimizer=Adam(learning_rate=.0001), loss='categorical_crossentropy',\n",
"              metrics=['accuracy'])\n",
"# sparse_categorical_crossentropy"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "9cd2ba27",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/30\n",
"1018/1018 [==============================] - 391s 379ms/step - loss: 2.4329 - accuracy: 0.1571 - val_loss: 2.1902 - val_accuracy: 0.2525\n",
"Epoch 2/30\n",
"1018/1018 [==============================] - 379s 373ms/step - loss: 2.0578 - accuracy: 0.2960 - val_loss: 1.9160 - val_accuracy: 0.3330\n",
"Epoch 3/30\n",
"1018/1018 [==============================] - 381s 374ms/step - loss: 1.8221 - accuracy: 0.3681 - val_loss: 1.8588 - val_accuracy: 0.3535\n",
"Epoch 4/30\n",
"1018/1018 [==============================] - 383s 376ms/step - loss: 1.6406 - accuracy: 0.4272 - val_loss: 1.7819 - val_accuracy: 0.4028\n",
"Epoch 5/30\n",
"1018/1018 [==============================] - 383s 376ms/step - loss: 1.4577 - accuracy: 0.4920 - val_loss: 1.7216 - val_accuracy: 0.4158\n",
"Epoch 6/30\n",
"1018/1018 [==============================] - 379s 372ms/step - loss: 1.2528 - accuracy: 0.5607 - val_loss: 1.7924 - val_accuracy: 0.4140\n",
"Epoch 7/30\n",
"1018/1018 [==============================] - 378s 371ms/step - loss: 1.0030 - accuracy: 0.6469 - val_loss: 1.8017 - val_accuracy: 0.4303\n",
"Epoch 8/30\n",
"1018/1018 [==============================] - 379s 372ms/step - loss: 0.7405 - accuracy: 0.7420 - val_loss: 1.9863 - val_accuracy: 0.4453\n",
"Epoch 9/30\n",
"1018/1018 [==============================] - 379s 372ms/step - loss: 0.4704 - accuracy: 0.8354 - val_loss: 2.3988 - val_accuracy: 0.4263\n",
"Epoch 10/30\n",
"1018/1018 [==============================] - 379s 372ms/step - loss: 0.3059 - accuracy: 0.8944 - val_loss: 2.7526 - val_accuracy: 0.4303\n",
"Epoch 11/30\n",
"1018/1018 [==============================] - 377s 371ms/step - loss: 0.2160 - accuracy: 0.9278 - val_loss: 3.0618 - val_accuracy: 0.4250\n",
"Epoch 12/30\n",
" 437/1018 [===========>..................] - ETA: 2:53 - loss: 0.1370 - accuracy: 0.9536"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
[ANSI escape-coded traceback elided: KeyboardInterrupt raised inside model.fit during epoch 12; frames pass through keras traceback_utils, training.py, and tensorflow eager execute.py]
]
}
],
"source": [
"model.fit(x=train_generator,\n",
"          steps_per_epoch=len(train_generator),\n",
"          validation_data=validation_generator,\n",
"          validation_steps=len(validation_generator),\n",
"          epochs=30,\n",
"          verbose=1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "63f791af",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -1,882 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "572dc7fb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-08-01 22:09:35.958273: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n"
]
}
],
"source": [
"from matplotlib import pyplot as plt\n",
"import cv\n",
"from matplotlib.image import imread\n",
"import pandas as pd\n",
"from collections import Counter\n",
"import json\n",
"import os\n",
"import re\n",
"import tempfile\n",
"import numpy as np\n",
"from os.path import exists\n",
"from imblearn.under_sampling import RandomUnderSampler\n",
"from PIL import ImageFile\n",
"import sklearn as sk\n",
"from sklearn.model_selection import train_test_split, StratifiedShuffleSplit\n",
"import tensorflow as tf\n",
"import tensorflow.keras\n",
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten, Activation\n",
"from tensorflow.keras.models import Sequential\n",
"from tensorflow.keras.optimizers import Adam\n",
"# custom modules\n",
"import image_faults\n",
"\n",
"ImageFile.LOAD_TRUNCATED_IMAGES = True"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "a5c72863",
"metadata": {},
"outputs": [],
"source": [
"image_faults.faulty_images() # removes faulty images\n",
"df = pd.read_csv('expanded_class.csv', index_col=[0], low_memory=False)\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "67ecdebe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1')\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
[TensorFlow device-discovery log elided: CUDA libraries (libcuda, libcudart, libcublas, libcublasLt, libcufft, libcurand, libcusolver, libcusparse, libcudnn) loaded; two NVIDIA GeForce RTX 3090 GPUs found (pciBusID 0000:04:00.0 and 0000:0b:00.0, compute capability 8.6); devices /device:GPU:0 (22425 MB) and /device:GPU:1 (21683 MB) created]
]
}
],
"source": [
"mirrored_strategy = tf.distribute.MirroredStrategy(devices=[\"/gpu:0\",\"/gpu:1\"])\n",
"#\"/gpu:0\","
]
},
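MirroredStrategy only replicates variables created inside its scope; constructing the strategy by itself parallelizes nothing. For the model built later in this notebook to actually train on both GPUs, construction and compilation would need to happen under the scope. A sketch (not in the original; the Xception base matches the preprocessing chosen below):

with mirrored_strategy.scope():
    # variables created here are mirrored across /gpu:0 and /gpu:1
    model = tf.keras.applications.xception.Xception(weights='imagenet')
    model.compile(optimizer=Adam(learning_rate=.0001),
                  loss='categorical_crossentropy', metrics=['accuracy'])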
{
"cell_type": "code",
"execution_count": 4,
"id": "a89913e0",
"metadata": {},
"outputs": [],
"source": [
"def dict_pics_jup():\n",
"    '''\n",
"    {source:target} dict used to replace source urls with image location as input\n",
"    '''\n",
"    target_dir = os.getcwd() + os.sep + \"training_images\"\n",
"    with open('temp_pics_source_list.txt') as f:\n",
"        temp_pics_source_list = json.load(f)\n",
"\n",
"    dict_pics = {}\n",
"    for k in temp_pics_source_list:\n",
"        try:\n",
"            patt_1 = re.search(r'[^/]+(?=/\\$_|.(\\.jpg|\\.jpeg|\\.png))', k, re.IGNORECASE)\n",
"            patt_2 = re.search(r'(\\.jpg|\\.jpeg|\\.png)', k, re.IGNORECASE)\n",
"            if patt_1 and patt_2 is not None:\n",
"                tag = patt_1.group() + patt_2.group().lower()\n",
"                file_name = target_dir + os.sep + tag\n",
"                dict_pics.update({k:file_name})\n",
"        except TypeError:\n",
"            print(k)\n",
"    print(\"{source:target} dictionary created @ \" + target_dir)\n",
"    return dict_pics\n"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "1057a442",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"nan\n",
"{source:target} dictionary created @ /home/unknown/Sync/projects/ebay ML Lister Project/training_images\n"
]
}
],
"source": [
"dict_pics = dict_pics_jup()\n",
"\n",
"with open('women_cat_list.txt') as f:\n",
"    women_cats = json.load(f)\n",
"with open('men_cat_list.txt') as f:\n",
"    men_cats = json.load(f)\n",
"\n",
"with open('temp_pics_source_list.txt') as f:\n",
"    tempics = json.load(f)\n",
"# image urls that were not named properly; their rows get dropped from the dataframe\n",
"drop_row_vals = []\n",
"for pic in tempics:\n",
"    try:\n",
"        dict_pics[pic]\n",
"    except KeyError:\n",
"        drop_row_vals.append(pic)\n",
"\n",
"df['PrimaryCategoryID'] = df['PrimaryCategoryID'].astype(str) # pandas thinks ids are ints\n",
"df = df[df.PictureURL.isin(drop_row_vals)==False] # remove improperly named image files\n",
"df = df[df.PrimaryCategoryID.isin(men_cats)==False] # drops rows whose PrimaryCategoryID appears in men_cats\n",
"\n",
"blah = pd.Series(df.PictureURL)\n",
"df = df.drop(labels=['PictureURL'], axis=1)\n",
"\n",
"blah = blah.apply(lambda x: dict_pics[x])\n",
"df = pd.concat([blah, df],axis=1)\n",
"df = df.groupby('PrimaryCategoryID').filter(lambda x: len(x)>25) # removes cat outliers"
]
},
{
"cell_type": "code",
"execution_count": 78,
"id": "7a6146e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'/home/unknown/Sync/projects/ebay ML Lister Project/training_images/7BQAAOSw0eZhpmqM.jpg'"
]
},
"execution_count": 78,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df=df.sample(frac=1)\n",
"something = df.iloc[1,0]\n",
"something"
]
},
{
"cell_type": "code",
"execution_count": 60,
"id": "114cc3c0",
"metadata": {},
"outputs": [],
"source": [
"undersample = RandomUnderSampler(sampling_strategy='auto')\n",
"train, y_under = undersample.fit_resample(df, df['PrimaryCategoryID'])\n",
"#print(Counter(train['PrimaryCategoryID']))"
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "506aa5cf",
"metadata": {},
"outputs": [],
"source": [
"train, test = train_test_split(train, test_size=0.2, random_state=42)\n",
"# stratify=train['PrimaryCategoryID']\n",
"# train['PrimaryCategoryID'].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 80,
"id": "4d72eb90",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/unknown/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/keras_preprocessing/image/dataframe_iterator.py:279: UserWarning: Found 5 invalid image filename(s) in x_col=\"PictureURL\". These filename(s) will be ignored.\n",
"  warnings.warn(\n",
"/home/unknown/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/keras_preprocessing/image/dataframe_iterator.py:279: UserWarning: Found 5 invalid image filename(s) in x_col=\"PictureURL\". These filename(s) will be ignored.\n",
"  warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 43744 validated image filenames belonging to 7 classes.\n",
"Found 10935 validated image filenames belonging to 7 classes.\n"
]
}
],
"source": [
"datagen = ImageDataGenerator(rescale=1./255.,\n",
"                             validation_split=.2,\n",
"                             #samplewise_std_normalization=True,\n",
"                             #horizontal_flip= True,\n",
"                             #vertical_flip= True,\n",
"                             #width_shift_range= 0.2,\n",
"                             #height_shift_range= 0.2,\n",
"                             #rotation_range= 90,\n",
"                             preprocessing_function=tf.keras.applications.xception.preprocess_input)\n",
"\n",
"train_generator=datagen.flow_from_dataframe(\n",
"    dataframe=train[:len(train)],\n",
"    directory='./training_images',\n",
"    x_col='PictureURL',\n",
"    y_col='PrimaryCategoryID',\n",
"    batch_size=56,\n",
"    seed=42,\n",
"    shuffle=True,\n",
"    target_size=(299,299),\n",
"    subset='training'\n",
"    )\n",
"validation_generator=datagen.flow_from_dataframe(\n",
"    dataframe=train[:len(train)], # same dataframe as above; validation_split=.2 carves out the validation subset\n",
"    directory='./training_images',\n",
"    x_col='PictureURL',\n",
"    y_col='PrimaryCategoryID',\n",
"    batch_size=56,\n",
"    seed=42,\n",
"    shuffle=True,\n",
"    target_size=(299,299),\n",
"    subset='validation'\n",
"    )"
]
},
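A caveat on this generator: keras applies preprocessing_function before rescale, and xception.preprocess_input already maps raw [0, 255] pixels to [-1, 1]. Multiplying that result by 1/255 squeezes the network's inputs into roughly [-0.004, 0.004], which is also consistent with the imshow clipping warnings further down. Dropping the rescale keeps the intended range (a sketch):

# preprocess_input handles scaling on its own; no rescale= needed
datagen = ImageDataGenerator(
    validation_split=.2,
    preprocessing_function=tf.keras.applications.xception.preprocess_input)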
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 81,
|
||||
"id": "7b70f37f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"imgs, labels = next(train_generator)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 82,
|
||||
"id": "1ed54bf5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def plotImages(images_arr):\n",
|
||||
" fig, axes = plt.subplots(1, 10, figsize=(20,20))\n",
|
||||
" axes = axes.flatten()\n",
|
||||
" for img, ax in zip( images_arr, axes):\n",
|
||||
" ax.imshow(img)\n",
|
||||
" ax.axis('off')\n",
|
||||
" plt.tight_layout()\n",
|
||||
" plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 83,
|
||||
"id": "85934565",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n",
|
||||
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"image/png": "iVBORw0KGgoAAAANSUhEUgAABZgAAACSCAYAAADIDq8FAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAALIUlEQVR4nO3d63KbSBCA0emtvP8rz/7QDckgwTDADJxT5U3irDFSVLH0pd1EzjkBAAAAAMBS/x19AgAAAAAA9ElgBgAAAACgiMAMAAAAAEARgRkAAAAAgCICMwAAAAAARQRmAAAAAACK/Pv2mxGR9zoR+pBzjjn/n8cOnzx2KOWxQymPHUp57FDKY4dSHjuUuuxjJ1JK57pFu5vz2Dnd44bVph43JpgBAAAAACgiMAMAAACcWES6Tf2ehblaaIrADAAAAHBqkeJUhRloydcdzAAAAAD0LWcjv8B2TDADAAAAAFBEYAYAAAC4AlsygA0IzAAAAABXYFMGsAGBGQAAAACAIgIzAAAAAABFBGYAAAAAAIoIzAAAAAAAFBGYAQAAAAAoIjADAAAAAFBEYAYAAAAAoIjADAAAAABAEYEZAAAAAIAiAjMAAAAAAEUEZgAAAAAAigjMAAAAAAAUEZgBAAAAACgiMAMAAAAAUERg5iaOPgEAAAAAoDcCMymle1+OJDQDAAAAALMJzKS4R2VtGQAAAABYQmC+uvj6SwAAAACASQLznlqrtzF+ShGvqWYAAAAAgCl9B+beKmg++gRu4suy5Zxvbynd7157mQEAAACACf+OPoE54ltIvv9ezo3U2waN3X+zm/Hjbo3UTCAHAAAAANrQfmA2PbtOpfsvKh4LAAAAADiHtgPzj6D5mFqOiOeUrknmdN+tPKcGj0w2v70rd7eFBAAAAADYT9s7mPPcUPoSMXHlOgq8ljDnZEMGAAAAAPCu7QnmmUwtf8hJZAcAAAAANtdVYB6G5K8X/ru8+vfN0klyAAAAAOD8mg7Mw4g8NqU8tnf57WNSttcBAAAAAGAjTQfmqdUXn9PLU9PMkeIWmS/HjgwAAAAAYHtNB+Ypn+F5LDA3sZc50qET1M/7IIY/CM8AAAAAQB3dBOaIaCMaz/AWvCOnI077bXI7v36IGJ6M2AwAAAAAlOsmMKe07MJ+TQXpsUnmraebJ479HGrWlgEAAACAlcoC85c4OnbhvTWWROVvH3tcbI7bnPDYzbi/z8UIAQAAAIAelU8wfwZTgXSdg/c1AwAAAAAsNT8w/7pQ3AYrF9ZML6d03NRyzvlvMI7nLPMf8X7nvo4BAAAAANCw2YF5MiwfYBhfP1dyrI3S1Xz24ZwmA/LYOTe1QxoAAAAAYESli/zlwX9TxVUPjwNNR+OeIuzUuebBHTY15QwAAAAA0JqvgXnJMPAWnTcPivXcyeScc3/Tvx+rNPYhZAMAAAAA6/yYYB6LkPdp5Z37bXMrMDayx+08+30IAAAAAOyjaEXGkcPBpZPJn7uat3T7XHnV/bTFeQrLAAAAAEBN/y39gB42T9zWZPx9326fv94S6jdFgThuq07EZQAAAACgtsWBuRdjPXm3yJwbCvEtnQsAAAAAcCr9BWaDuAAAAAAATeguMEfupzBbSwEAAAAAnFl3gXmr/cZnt+cOagAAAADgGroLzCmlPxfwa1XtqLv2eBIzAAAAAFDT8sDcQNw1jFvKHQcAAAAA1LM4MEe6TRCvmiJuIFJfUrZiBAAAAACop8sVGV2qFdXXHkdfBgAAAAAqKQjM8XyLuL0tP0Kd2trVIHStsCsQAwAAAACN2H2CudYF+uLRuQEAAAAAOMTqwJwXX3GvThXOKbq/2N/SXdYl0+Jjlv+ZAQAAAAD89W/NBx8aKk/RSG/B+NGNf92fOednZBaJAQAAAICjrQrMJeqF0f4D6yMYL7lPcspVbvowVgMAAAAAlNh9BzPTZgXf/rs6AAAAAHASqwKzCdj1htPL1l4AAAAAAD0xwQwAAAAAQJFVgdnELQAAAADAdZlgBgAAAACgiMB8VdZnAwAAAAArCcwAAAAAABQRmAEAAAAAKCIwX5XrMwIAAAAAKwnMAAAAAAAUEZgBAAAAACgiMAMAAAAAUERgBgAAAACgiMAMAAAAAECRFYE51zsLDhBHnwAAAAAA0DkTzAAAAAAAFCkOzNkAMwAAAADApZlgvqiwIQMAAAAAWKk8MAuU/VKXAQAAAIAKTDBfkLwMAAAAANQgMAMAAAAAUERgBgAAAACgiMAMAAAAAECR8sCcK54FAAAAAADdMcEMAAAAAEARgflDxNFnAAAAAADQh3/LPySnbD0GAAAAAMDlLZ5gPntbFs8BAAAAAOZZviJDgO1a2AECAAAAAFRiB/MH/RUAAAAAYB6BeYzIDAAAAADwk8AMAAAAAEARgfnDqS/yZzIbAAAAAKhIYL4UhRkAAAAAqEdgHnPSKWZ5GQAAAACoaXlgVin75M8NAAAAAKjMBPNFhMIMAAAAAFS2ODDLlAAAAAAApGSC+TJyzum0y6UBAAAAgEMUBOZIESecY46Pn5/wJgIAAAAA1GSC+SFP/BwAAAAAgFECMwAAAAAARcoD84lXSJxxA0hKBrMBAAAAgLqKA/NpGuzoDYmvty8+9zUDAAAAAFzQqhUZvbfViFQ01puNAgMAAAAArAnM0e0uiYhIEfE9FOffty3i16wzAAAAAMB5/Tv6BPYUWwXxwkloAAAAAICerVqR0ZNhXM45p/xrz0X8DtKPQ0S6TUS3PMwcEaatAYD1PJ0AAAAGVgfm3rZk/AzLq47X79oQAICU0u+A7Lu2AACAgQorMtrdD7HZSowPOef3fcwRt3vE1QABgF48njblwa/zl98HAABIZ93BPBKWa08uzz+P7IUYANC+z+crv34NAACQThSYp6eV846DxK+Rn9fZ3HYzbxK4504S2doBAAAAAGzgNIF5zJ5Ty8NPNda6I6LO+QyjskkiAAAAAOBAJwrMr9pa2nFHp6ALVkw/djK/DvB+/GFojhQp//gEEe/rENPCUzLADAAAAABsoU5grnWdvwX19NZqh+l05SqMiQpbetMe5/L7OoN5xsUI1965EjMAAAAAUF9bE8w5TYfewftzfgTcijsivnzuVYd9lubH4WNGdAYAriFSivtzBauvAICNjH1HNUAtVQLznDUPs00cZuu/A6ebb6Xx7Pw4ysexIqUYOX6e2InhSwEAnMPrO5juL/jO9lU+3i977AUtAACcU1sTzEf6NlVcawXImPzj0F6LAQAdGbsOBQAAcF7VAnNEdD6ZsvkM8yHCizsAaMf9yr1jX52H12S4bQJr9NnHICDPeZZxuslsAOiYVRnAFkwwpzRrwCai3dd5U8RlAGjHMyDPed6R2vvH7d8XJf6itRsDAABU81/Ng/V48bqImBFiI3V5cb5IvjsVABrxOSmUc36+jVkVdDdQMumU869dYAAAQO+qBuaU0umDZmOv9QCAjkwF5b/vz2lyl8aBlkRmqzEAAOAaKq/IiBSdvJwomwqK+3/bv41COAC0a+qZxDPgtrgjY
wF7HQGgbW/XfvB1G1hpmx3Mnb8o+unx9/CZbyMAsJ2TPodo/5/gAQCA2uqvyEj3ncYNT9Cu32l429rc9pRw0ycHAHzTa6e9b/YAAACuY4PAfBXzrwS/Kxf2AwAO4NtrAQDgmrZZkZHSc4o5p3auHl7/auxx3wbSxg283Tx1GQDYl7gMAP2KCF/LgVU2C8wPt9Cc01F/V9WPyr8/x95/Me9xGwEAhrwQBYDzEJmBNXZbkXFIAr1Cd73CbQQAmpFzTodNDgAAAM3ZfIL5JtLrun/bTzMfPdE7/Pxb/Qvg0bcRALgueRkAAHg45CJ/seWF6Frrrq2dDwDAEp7LAMA1xOANYIGwYwcAAAAAgBKHTDADAAAAANA/gRkAAAAAgCICMwAAAAAARQRmAAAAAACKCMwAAAAAABQRmAEAAAAAKPI/pCct2vtE3uwAAAAASUVORK5CYII=\n",
|
||||
"text/plain": [
|
||||
"<Figure size 1440x1440 with 10 Axes>"
|
||||
]
|
||||
},
|
||||
"metadata": {
|
||||
"needs_background": "light"
|
||||
},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"plotImages(imgs)\n",
|
||||
"# image = plt.imread('training_images/0t0AAOSw4tNgSQ1j.jpg')\n",
|
||||
"# plt.imshow(image)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 84,
|
||||
"id": "6322bcad",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#physical_devices = tf.config.list_physical_devices('GPU')\n",
|
||||
"#print(len(physical_devices))\n",
|
||||
"#print(physical_devices)\n",
|
||||
"#for gpu_instance in physical_devices: \n",
|
||||
"# tf.config.experimental.set_memory_growth(gpu_instance, True)\n",
|
||||
"#tf.config.experimental.set_memory_growth(physical_devices[0], True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 85,
|
||||
"id": "07fd25c6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# see https://www.kaggle.com/dmitrypukhov/cnn-with-imagedatagenerator-flow-from-dataframe for train/test/val split \n",
|
||||
"# example\n",
|
||||
"\n",
|
||||
"# may need to either create a test dataset from the original dataset or just download a new one"
]
},
{
"cell_type": "code",
"execution_count": 86,
"id": "fe06f2bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"model_5\"\n",
"__________________________________________________________________________________________________\n",
"Layer (type)                    Output Shape          Param #     Connected to\n",
"==================================================================================================\n",
"input_6 (InputLayer)            [(None, 299, 299, 3)  0\n",
"__________________________________________________________________________________________________\n",
"block1_conv1 (Conv2D)           (None, 149, 149, 32)  864         input_6[0][0]\n",
"__________________________________________________________________________________________________\n",
"block1_conv1_bn (BatchNormaliza (None, 149, 149, 32)  128         block1_conv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block1_conv1_act (Activation)   (None, 149, 149, 32)  0           block1_conv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block1_conv2 (Conv2D)           (None, 147, 147, 64)  18432       block1_conv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block1_conv2_bn (BatchNormaliza (None, 147, 147, 64)  256         block1_conv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block1_conv2_act (Activation)   (None, 147, 147, 64)  0           block1_conv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_sepconv1 (SeparableConv2 (None, 147, 147, 128  8768        block1_conv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_sepconv1_bn (BatchNormal (None, 147, 147, 128  512         block2_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_sepconv2_act (Activation (None, 147, 147, 128  0           block2_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_sepconv2 (SeparableConv2 (None, 147, 147, 128  17536       block2_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_sepconv2_bn (BatchNormal (None, 147, 147, 128  512         block2_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"conv2d_20 (Conv2D)              (None, 74, 74, 128)   8192        block1_conv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block2_pool (MaxPooling2D)      (None, 74, 74, 128)   0           block2_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"batch_normalization_20 (BatchNo (None, 74, 74, 128)   512         conv2d_20[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_60 (Add)                    (None, 74, 74, 128)   0           block2_pool[0][0]\n",
"                                                                  batch_normalization_20[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv1_act (Activation (None, 74, 74, 128)   0           add_60[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv1 (SeparableConv2 (None, 74, 74, 256)   33920       block3_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv1_bn (BatchNormal (None, 74, 74, 256)   1024        block3_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv2_act (Activation (None, 74, 74, 256)   0           block3_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv2 (SeparableConv2 (None, 74, 74, 256)   67840       block3_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_sepconv2_bn (BatchNormal (None, 74, 74, 256)   1024        block3_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"conv2d_21 (Conv2D)              (None, 37, 37, 256)   32768       add_60[0][0]\n",
"__________________________________________________________________________________________________\n",
"block3_pool (MaxPooling2D)      (None, 37, 37, 256)   0           block3_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"batch_normalization_21 (BatchNo (None, 37, 37, 256)   1024        conv2d_21[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_61 (Add)                    (None, 37, 37, 256)   0           block3_pool[0][0]\n",
"                                                                  batch_normalization_21[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv1_act (Activation (None, 37, 37, 256)   0           add_61[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv1 (SeparableConv2 (None, 37, 37, 728)   188672      block4_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv1_bn (BatchNormal (None, 37, 37, 728)   2912        block4_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv2_act (Activation (None, 37, 37, 728)   0           block4_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv2 (SeparableConv2 (None, 37, 37, 728)   536536      block4_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_sepconv2_bn (BatchNormal (None, 37, 37, 728)   2912        block4_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"conv2d_22 (Conv2D)              (None, 19, 19, 728)   186368      add_61[0][0]\n",
"__________________________________________________________________________________________________\n",
"block4_pool (MaxPooling2D)      (None, 19, 19, 728)   0           block4_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"batch_normalization_22 (BatchNo (None, 19, 19, 728)   2912        conv2d_22[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_62 (Add)                    (None, 19, 19, 728)   0           block4_pool[0][0]\n",
"                                                                  batch_normalization_22[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv1_act (Activation (None, 19, 19, 728)   0           add_62[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv1 (SeparableConv2 (None, 19, 19, 728)   536536      block5_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv1_bn (BatchNormal (None, 19, 19, 728)   2912        block5_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv2_act (Activation (None, 19, 19, 728)   0           block5_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv2 (SeparableConv2 (None, 19, 19, 728)   536536      block5_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv2_bn (BatchNormal (None, 19, 19, 728)   2912        block5_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv3_act (Activation (None, 19, 19, 728)   0           block5_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv3 (SeparableConv2 (None, 19, 19, 728)   536536      block5_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block5_sepconv3_bn (BatchNormal (None, 19, 19, 728)   2912        block5_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_63 (Add)                    (None, 19, 19, 728)   0           block5_sepconv3_bn[0][0]\n",
"                                                                  add_62[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv1_act (Activation (None, 19, 19, 728)   0           add_63[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv1 (SeparableConv2 (None, 19, 19, 728)   536536      block6_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv1_bn (BatchNormal (None, 19, 19, 728)   2912        block6_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv2_act (Activation (None, 19, 19, 728)   0           block6_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv2 (SeparableConv2 (None, 19, 19, 728)   536536      block6_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv2_bn (BatchNormal (None, 19, 19, 728)   2912        block6_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv3_act (Activation (None, 19, 19, 728)   0           block6_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv3 (SeparableConv2 (None, 19, 19, 728)   536536      block6_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block6_sepconv3_bn (BatchNormal (None, 19, 19, 728)   2912        block6_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_64 (Add)                    (None, 19, 19, 728)   0           block6_sepconv3_bn[0][0]\n",
"                                                                  add_63[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv1_act (Activation (None, 19, 19, 728)   0           add_64[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv1 (SeparableConv2 (None, 19, 19, 728)   536536      block7_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv1_bn (BatchNormal (None, 19, 19, 728)   2912        block7_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv2_act (Activation (None, 19, 19, 728)   0           block7_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv2 (SeparableConv2 (None, 19, 19, 728)   536536      block7_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv2_bn (BatchNormal (None, 19, 19, 728)   2912        block7_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv3_act (Activation (None, 19, 19, 728)   0           block7_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv3 (SeparableConv2 (None, 19, 19, 728)   536536      block7_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block7_sepconv3_bn (BatchNormal (None, 19, 19, 728)   2912        block7_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_65 (Add)                    (None, 19, 19, 728)   0           block7_sepconv3_bn[0][0]\n",
"                                                                  add_64[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv1_act (Activation (None, 19, 19, 728)   0           add_65[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv1 (SeparableConv2 (None, 19, 19, 728)   536536      block8_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv1_bn (BatchNormal (None, 19, 19, 728)   2912        block8_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv2_act (Activation (None, 19, 19, 728)   0           block8_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv2 (SeparableConv2 (None, 19, 19, 728)   536536      block8_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv2_bn (BatchNormal (None, 19, 19, 728)   2912        block8_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv3_act (Activation (None, 19, 19, 728)   0           block8_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv3 (SeparableConv2 (None, 19, 19, 728)   536536      block8_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block8_sepconv3_bn (BatchNormal (None, 19, 19, 728)   2912        block8_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_66 (Add)                    (None, 19, 19, 728)   0           block8_sepconv3_bn[0][0]\n",
"                                                                  add_65[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv1_act (Activation (None, 19, 19, 728)   0           add_66[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv1 (SeparableConv2 (None, 19, 19, 728)   536536      block9_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv1_bn (BatchNormal (None, 19, 19, 728)   2912        block9_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv2_act (Activation (None, 19, 19, 728)   0           block9_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv2 (SeparableConv2 (None, 19, 19, 728)   536536      block9_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv2_bn (BatchNormal (None, 19, 19, 728)   2912        block9_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv3_act (Activation (None, 19, 19, 728)   0           block9_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv3 (SeparableConv2 (None, 19, 19, 728)   536536      block9_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block9_sepconv3_bn (BatchNormal (None, 19, 19, 728)   2912        block9_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_67 (Add)                    (None, 19, 19, 728)   0           block9_sepconv3_bn[0][0]\n",
"                                                                  add_66[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv1_act (Activatio (None, 19, 19, 728)   0           add_67[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv1 (SeparableConv (None, 19, 19, 728)   536536      block10_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv1_bn (BatchNorma (None, 19, 19, 728)   2912        block10_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv2_act (Activatio (None, 19, 19, 728)   0           block10_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv2 (SeparableConv (None, 19, 19, 728)   536536      block10_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv2_bn (BatchNorma (None, 19, 19, 728)   2912        block10_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv3_act (Activatio (None, 19, 19, 728)   0           block10_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv3 (SeparableConv (None, 19, 19, 728)   536536      block10_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block10_sepconv3_bn (BatchNorma (None, 19, 19, 728)   2912        block10_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_68 (Add)                    (None, 19, 19, 728)   0           block10_sepconv3_bn[0][0]\n",
"                                                                  add_67[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv1_act (Activatio (None, 19, 19, 728)   0           add_68[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv1 (SeparableConv (None, 19, 19, 728)   536536      block11_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv1_bn (BatchNorma (None, 19, 19, 728)   2912        block11_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv2_act (Activatio (None, 19, 19, 728)   0           block11_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv2 (SeparableConv (None, 19, 19, 728)   536536      block11_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv2_bn (BatchNorma (None, 19, 19, 728)   2912        block11_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv3_act (Activatio (None, 19, 19, 728)   0           block11_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv3 (SeparableConv (None, 19, 19, 728)   536536      block11_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block11_sepconv3_bn (BatchNorma (None, 19, 19, 728)   2912        block11_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_69 (Add)                    (None, 19, 19, 728)   0           block11_sepconv3_bn[0][0]\n",
"                                                                  add_68[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv1_act (Activatio (None, 19, 19, 728)   0           add_69[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv1 (SeparableConv (None, 19, 19, 728)   536536      block12_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv1_bn (BatchNorma (None, 19, 19, 728)   2912        block12_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv2_act (Activatio (None, 19, 19, 728)   0           block12_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv2 (SeparableConv (None, 19, 19, 728)   536536      block12_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv2_bn (BatchNorma (None, 19, 19, 728)   2912        block12_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv3_act (Activatio (None, 19, 19, 728)   0           block12_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv3 (SeparableConv (None, 19, 19, 728)   536536      block12_sepconv3_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block12_sepconv3_bn (BatchNorma (None, 19, 19, 728)   2912        block12_sepconv3[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_70 (Add)                    (None, 19, 19, 728)   0           block12_sepconv3_bn[0][0]\n",
"                                                                  add_69[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv1_act (Activatio (None, 19, 19, 728)   0           add_70[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv1 (SeparableConv (None, 19, 19, 728)   536536      block13_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv1_bn (BatchNorma (None, 19, 19, 728)   2912        block13_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv2_act (Activatio (None, 19, 19, 728)   0           block13_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv2 (SeparableConv (None, 19, 19, 1024)  752024      block13_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_sepconv2_bn (BatchNorma (None, 19, 19, 1024)  4096        block13_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"conv2d_23 (Conv2D)              (None, 10, 10, 1024)  745472      add_70[0][0]\n",
"__________________________________________________________________________________________________\n",
"block13_pool (MaxPooling2D)     (None, 10, 10, 1024)  0           block13_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"batch_normalization_23 (BatchNo (None, 10, 10, 1024)  4096        conv2d_23[0][0]\n",
"__________________________________________________________________________________________________\n",
"add_71 (Add)                    (None, 10, 10, 1024)  0           block13_pool[0][0]\n",
"                                                                  batch_normalization_23[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv1 (SeparableConv (None, 10, 10, 1536)  1582080     add_71[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv1_bn (BatchNorma (None, 10, 10, 1536)  6144        block14_sepconv1[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv1_act (Activatio (None, 10, 10, 1536)  0           block14_sepconv1_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv2 (SeparableConv (None, 10, 10, 2048)  3159552     block14_sepconv1_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv2_bn (BatchNorma (None, 10, 10, 2048)  8192        block14_sepconv2[0][0]\n",
"__________________________________________________________________________________________________\n",
"block14_sepconv2_act (Activatio (None, 10, 10, 2048)  0           block14_sepconv2_bn[0][0]\n",
"__________________________________________________________________________________________________\n",
"avg_pool (GlobalAveragePooling2 (None, 2048)          0           block14_sepconv2_act[0][0]\n",
"__________________________________________________________________________________________________\n",
"predictions (Dense)             (None, 1000)          2049000     avg_pool[0][0]\n",
"__________________________________________________________________________________________________\n",
"dense_5 (Dense)                 (None, 7)             7007        predictions[0][0]\n",
"==================================================================================================\n",
"Total params: 22,917,487\n",
"Trainable params: 22,862,959\n",
"Non-trainable params: 54,528\n",
"__________________________________________________________________________________________________\n"
]
}
],
"source": [
"with mirrored_strategy.scope():  # for training on dual GPUs\n",
"#     physical_devices = tf.config.list_physical_devices('GPU')\n",
"#     tf.config.experimental.set_memory_growth(physical_devices[0], True)\n",
"    base_model = tf.keras.applications.xception.Xception(include_top=True, pooling='avg')\n",
"    for layer in base_model.layers:\n",
"        layer.trainable = True\n",
"    output = Dense(7, activation='softmax')(base_model.output)\n",
"    model = tf.keras.Model(base_model.input, output)\n",
"    model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy',\n",
"                  metrics=['accuracy'])\n",
"# sparse_categorical_crossentropy is the alternative loss for integer labels\n",
"# model = add_regularization(model)\n",
"model.summary()\n"
]
},
{
"cell_type": "code",
"execution_count": 87,
"id": "9cd2ba27",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-08-01 23:37:26.275294: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:656] In AUTO-mode, and switching to DATA-based sharding, instead of FILE-based sharding as we cannot find appropriate reader dataset op(s) to shard. Error: Did not find a shardable source, walked to a node which is not a dataset: name: \"FlatMapDataset/_2\"\n",
"op: \"FlatMapDataset\"\n",
"input: \"TensorDataset/_1\"\n",
"attr {\n",
"  key: \"Targuments\"\n",
"  value {\n",
"    list {\n",
"    }\n",
"  }\n",
"}\n",
"attr {\n",
"  key: \"f\"\n",
"  value {\n",
"    func {\n",
"      name: \"__inference_Dataset_flat_map_flat_map_fn_93447\"\n",
"    }\n",
"  }\n",
"}\n",
"attr {\n",
"  key: \"output_shapes\"\n",
"  value {\n",
"    list {\n",
"      shape {\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"      }\n",
"      shape {\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"        dim {\n",
"          size: -1\n",
"        }\n",
"      }\n",
"    }\n",
"  }\n",
"}\n",
"attr {\n",
"  key: \"output_types\"\n",
"  value {\n",
"    list {\n",
"      type: DT_FLOAT\n",
"      type: DT_FLOAT\n",
"    }\n",
"  }\n",
"}\n",
". Consider either turning off auto-sharding or switching the auto_shard_policy to DATA to shard this dataset. You can do this by creating a new `tf.data.Options()` object then setting `options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA` before applying the options object to the dataset via `dataset.with_options(options)`.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/6\n",
"INFO:tensorflow:batch_all_reduce: 158 all-reduces with algorithm = nccl, num_packs = 1\n",
"INFO:tensorflow:batch_all_reduce: 158 all-reduces with algorithm = nccl, num_packs = 1\n",
" 17/782 [..............................] - ETA: 6:35 - loss: 1.9460 - accuracy: 0.1428"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"Input \u001b[0;32mIn [87]\u001b[0m, in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mmodel\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfit\u001b[49m\u001b[43m(\u001b[49m\u001b[43mx\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtrain_generator\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2\u001b[0m \u001b[43m \u001b[49m\u001b[43msteps_per_epoch\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mlen\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mtrain_generator\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3\u001b[0m \u001b[43m \u001b[49m\u001b[43mvalidation_data\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mvalidation_generator\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 4\u001b[0m \u001b[43m \u001b[49m\u001b[43mvalidation_steps\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mlen\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mvalidation_generator\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 5\u001b[0m \u001b[43m \u001b[49m\u001b[43mepochs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m6\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 6\u001b[0m \u001b[43m \u001b[49m\u001b[43mverbose\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/training.py:1100\u001b[0m, in \u001b[0;36mModel.fit\u001b[0;34m(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)\u001b[0m\n\u001b[1;32m 1093\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m trace\u001b[38;5;241m.\u001b[39mTrace(\n\u001b[1;32m 1094\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mtrain\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 1095\u001b[0m epoch_num\u001b[38;5;241m=\u001b[39mepoch,\n\u001b[1;32m 1096\u001b[0m step_num\u001b[38;5;241m=\u001b[39mstep,\n\u001b[1;32m 1097\u001b[0m batch_size\u001b[38;5;241m=\u001b[39mbatch_size,\n\u001b[1;32m 1098\u001b[0m _r\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m1\u001b[39m):\n\u001b[1;32m 1099\u001b[0m callbacks\u001b[38;5;241m.\u001b[39mon_train_batch_begin(step)\n\u001b[0;32m-> 1100\u001b[0m tmp_logs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtrain_function\u001b[49m\u001b[43m(\u001b[49m\u001b[43miterator\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1101\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m data_handler\u001b[38;5;241m.\u001b[39mshould_sync:\n\u001b[1;32m 1102\u001b[0m context\u001b[38;5;241m.\u001b[39masync_wait()\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/def_function.py:828\u001b[0m, in \u001b[0;36mFunction.__call__\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 826\u001b[0m tracing_count \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mexperimental_get_tracing_count()\n\u001b[1;32m 827\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m trace\u001b[38;5;241m.\u001b[39mTrace(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_name) \u001b[38;5;28;01mas\u001b[39;00m tm:\n\u001b[0;32m--> 828\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwds\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 829\u001b[0m compiler \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mxla\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_experimental_compile \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnonXla\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 830\u001b[0m new_tracing_count \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mexperimental_get_tracing_count()\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/def_function.py:855\u001b[0m, in \u001b[0;36mFunction._call\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 852\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_lock\u001b[38;5;241m.\u001b[39mrelease()\n\u001b[1;32m 853\u001b[0m \u001b[38;5;66;03m# In this case we have created variables on the first call, so we run the\u001b[39;00m\n\u001b[1;32m 854\u001b[0m \u001b[38;5;66;03m# defunned version which is guaranteed to never create variables.\u001b[39;00m\n\u001b[0;32m--> 855\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_stateless_fn\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwds\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# pylint: disable=not-callable\u001b[39;00m\n\u001b[1;32m 856\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_stateful_fn \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 857\u001b[0m \u001b[38;5;66;03m# Release the lock early so that multiple threads can perform the call\u001b[39;00m\n\u001b[1;32m 858\u001b[0m \u001b[38;5;66;03m# in parallel.\u001b[39;00m\n\u001b[1;32m 859\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_lock\u001b[38;5;241m.\u001b[39mrelease()\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/function.py:2942\u001b[0m, in \u001b[0;36mFunction.__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 2939\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_lock:\n\u001b[1;32m 2940\u001b[0m (graph_function,\n\u001b[1;32m 2941\u001b[0m filtered_flat_args) \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_maybe_define_function(args, kwargs)\n\u001b[0;32m-> 2942\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mgraph_function\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_flat\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 2943\u001b[0m \u001b[43m \u001b[49m\u001b[43mfiltered_flat_args\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcaptured_inputs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mgraph_function\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcaptured_inputs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/function.py:1918\u001b[0m, in \u001b[0;36mConcreteFunction._call_flat\u001b[0;34m(self, args, captured_inputs, cancellation_manager)\u001b[0m\n\u001b[1;32m 1914\u001b[0m possible_gradient_type \u001b[38;5;241m=\u001b[39m gradients_util\u001b[38;5;241m.\u001b[39mPossibleTapeGradientTypes(args)\n\u001b[1;32m 1915\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (possible_gradient_type \u001b[38;5;241m==\u001b[39m gradients_util\u001b[38;5;241m.\u001b[39mPOSSIBLE_GRADIENT_TYPES_NONE\n\u001b[1;32m 1916\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m executing_eagerly):\n\u001b[1;32m 1917\u001b[0m \u001b[38;5;66;03m# No tape is watching; skip to running the function.\u001b[39;00m\n\u001b[0;32m-> 1918\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_build_call_outputs(\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_inference_function\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcall\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 1919\u001b[0m \u001b[43m \u001b[49m\u001b[43mctx\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcancellation_manager\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcancellation_manager\u001b[49m\u001b[43m)\u001b[49m)\n\u001b[1;32m 1920\u001b[0m forward_backward \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_select_forward_and_backward_functions(\n\u001b[1;32m 1921\u001b[0m args,\n\u001b[1;32m 1922\u001b[0m possible_gradient_type,\n\u001b[1;32m 1923\u001b[0m executing_eagerly)\n\u001b[1;32m 1924\u001b[0m forward_function, args_with_tangents \u001b[38;5;241m=\u001b[39m forward_backward\u001b[38;5;241m.\u001b[39mforward()\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/function.py:555\u001b[0m, in \u001b[0;36m_EagerDefinedFunction.call\u001b[0;34m(self, ctx, args, cancellation_manager)\u001b[0m\n\u001b[1;32m 553\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m _InterpolateFunctionError(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 554\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m cancellation_manager \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m--> 555\u001b[0m outputs \u001b[38;5;241m=\u001b[39m \u001b[43mexecute\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mexecute\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 556\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msignature\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mname\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 557\u001b[0m \u001b[43m \u001b[49m\u001b[43mnum_outputs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_num_outputs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 558\u001b[0m \u001b[43m \u001b[49m\u001b[43minputs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 559\u001b[0m \u001b[43m \u001b[49m\u001b[43mattrs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mattrs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 560\u001b[0m \u001b[43m \u001b[49m\u001b[43mctx\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mctx\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 561\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 562\u001b[0m outputs \u001b[38;5;241m=\u001b[39m execute\u001b[38;5;241m.\u001b[39mexecute_with_cancellation(\n\u001b[1;32m 563\u001b[0m \u001b[38;5;28mstr\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msignature\u001b[38;5;241m.\u001b[39mname),\n\u001b[1;32m 564\u001b[0m num_outputs\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_num_outputs,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 567\u001b[0m ctx\u001b[38;5;241m=\u001b[39mctx,\n\u001b[1;32m 568\u001b[0m cancellation_manager\u001b[38;5;241m=\u001b[39mcancellation_manager)\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/eager/execute.py:59\u001b[0m, in \u001b[0;36mquick_execute\u001b[0;34m(op_name, num_outputs, inputs, attrs, ctx, name)\u001b[0m\n\u001b[1;32m 57\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 58\u001b[0m ctx\u001b[38;5;241m.\u001b[39mensure_initialized()\n\u001b[0;32m---> 59\u001b[0m tensors \u001b[38;5;241m=\u001b[39m \u001b[43mpywrap_tfe\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mTFE_Py_Execute\u001b[49m\u001b[43m(\u001b[49m\u001b[43mctx\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_handle\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdevice_name\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mop_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 60\u001b[0m \u001b[43m \u001b[49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mattrs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mnum_outputs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 61\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m core\u001b[38;5;241m.\u001b[39m_NotOkStatusException \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 62\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"model.fit(x=train_generator,\n",
"          steps_per_epoch=len(train_generator),\n",
"          validation_data=validation_generator,\n",
"          validation_steps=len(validation_generator),\n",
"          epochs=6,\n",
"          verbose=1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "63f791af",
"metadata": {},
"outputs": [],
"source": [
"model.save(\"Model_1.h5\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -1,105 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 4,
"id": "99d6b339",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-08-01 21:12:17.069258: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n"
]
}
],
"source": [
"from sklearn.datasets import fetch_openml\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"from sklearn.linear_model import SGDClassifier\n",
"from sklearn.model_selection import StratifiedKFold, cross_val_predict, train_test_split, StratifiedShuffleSplit, cross_val_score\n",
"from sklearn.base import clone, BaseEstimator\n",
"from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, precision_recall_curve, roc_curve, roc_auc_score\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn.svm import SVC\n",
"from sklearn.multiclass import OneVsRestClassifier\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"\n",
"import joblib"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "20c2c97e",
"metadata": {},
"outputs": [
{
"ename": "TypeError",
"evalue": "('Keyword argument not understood:', 'keepdims')",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"Input \u001b[0;32mIn [7]\u001b[0m, in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m new_model \u001b[38;5;241m=\u001b[39m \u001b[43mtf\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkeras\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodels\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload_model\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mModel_1.h5\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/saving/save.py:206\u001b[0m, in \u001b[0;36mload_model\u001b[0;34m(filepath, custom_objects, compile, options)\u001b[0m\n\u001b[1;32m 203\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m load_context\u001b[38;5;241m.\u001b[39mload_context(options):\n\u001b[1;32m 204\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (h5py \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m\n\u001b[1;32m 205\u001b[0m (\u001b[38;5;28misinstance\u001b[39m(filepath, h5py\u001b[38;5;241m.\u001b[39mFile) \u001b[38;5;129;01mor\u001b[39;00m h5py\u001b[38;5;241m.\u001b[39mis_hdf5(filepath))):\n\u001b[0;32m--> 206\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mhdf5_format\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload_model_from_hdf5\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfilepath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 207\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mcompile\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 209\u001b[0m filepath \u001b[38;5;241m=\u001b[39m path_to_string(filepath)\n\u001b[1;32m 210\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(filepath, six\u001b[38;5;241m.\u001b[39mstring_types):\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/saving/hdf5_format.py:183\u001b[0m, in \u001b[0;36mload_model_from_hdf5\u001b[0;34m(filepath, custom_objects, compile)\u001b[0m\n\u001b[1;32m 181\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mNo model found in config file.\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 182\u001b[0m model_config \u001b[38;5;241m=\u001b[39m json_utils\u001b[38;5;241m.\u001b[39mdecode(model_config\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m'\u001b[39m))\n\u001b[0;32m--> 183\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mmodel_config_lib\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodel_from_config\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_config\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 184\u001b[0m \u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcustom_objects\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 186\u001b[0m \u001b[38;5;66;03m# set weights\u001b[39;00m\n\u001b[1;32m 187\u001b[0m load_weights_from_hdf5_group(f[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmodel_weights\u001b[39m\u001b[38;5;124m'\u001b[39m], model\u001b[38;5;241m.\u001b[39mlayers)\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/saving/model_config.py:64\u001b[0m, in \u001b[0;36mmodel_from_config\u001b[0;34m(config, custom_objects)\u001b[0m\n\u001b[1;32m 60\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m`model_from_config` expects a dictionary, not a list. \u001b[39m\u001b[38;5;124m'\u001b[39m\n\u001b[1;32m 61\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mMaybe you meant to use \u001b[39m\u001b[38;5;124m'\u001b[39m\n\u001b[1;32m 62\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m`Sequential.from_config(config)`?\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 63\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtensorflow\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mpython\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mkeras\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mlayers\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m deserialize \u001b[38;5;66;03m# pylint: disable=g-import-not-at-top\u001b[39;00m\n\u001b[0;32m---> 64\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mdeserialize\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcustom_objects\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/layers/serialization.py:173\u001b[0m, in \u001b[0;36mdeserialize\u001b[0;34m(config, custom_objects)\u001b[0m\n\u001b[1;32m 162\u001b[0m \u001b[38;5;124;03m\"\"\"Instantiates a layer from a config dictionary.\u001b[39;00m\n\u001b[1;32m 163\u001b[0m \n\u001b[1;32m 164\u001b[0m \u001b[38;5;124;03mArguments:\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 170\u001b[0m \u001b[38;5;124;03m Layer instance (may be Model, Sequential, Network, Layer...)\u001b[39;00m\n\u001b[1;32m 171\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 172\u001b[0m populate_deserializable_objects()\n\u001b[0;32m--> 173\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mgeneric_utils\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdeserialize_keras_object\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 174\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 175\u001b[0m \u001b[43m \u001b[49m\u001b[43mmodule_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mLOCAL\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mALL_OBJECTS\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 176\u001b[0m \u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcustom_objects\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 177\u001b[0m \u001b[43m \u001b[49m\u001b[43mprintable_module_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mlayer\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/utils/generic_utils.py:354\u001b[0m, in \u001b[0;36mdeserialize_keras_object\u001b[0;34m(identifier, module_objects, custom_objects, printable_module_name)\u001b[0m\n\u001b[1;32m 351\u001b[0m custom_objects \u001b[38;5;241m=\u001b[39m custom_objects \u001b[38;5;129;01mor\u001b[39;00m {}\n\u001b[1;32m 353\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcustom_objects\u001b[39m\u001b[38;5;124m'\u001b[39m \u001b[38;5;129;01min\u001b[39;00m arg_spec\u001b[38;5;241m.\u001b[39margs:\n\u001b[0;32m--> 354\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 355\u001b[0m \u001b[43m \u001b[49m\u001b[43mcls_config\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 356\u001b[0m \u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mdict\u001b[39;49m\u001b[43m(\u001b[49m\n\u001b[1;32m 357\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mlist\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m_GLOBAL_CUSTOM_OBJECTS\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mitems\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\n\u001b[1;32m 358\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mlist\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mitems\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 359\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m CustomObjectScope(custom_objects):\n\u001b[1;32m 360\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mfrom_config(cls_config)\n",
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/functional.py:668\u001b[0m, in \u001b[0;36mFunctional.from_config\u001b[0;34m(cls, config, custom_objects)\u001b[0m\n\u001b[1;32m 652\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 653\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mfrom_config\u001b[39m(\u001b[38;5;28mcls\u001b[39m, config, custom_objects\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m):\n\u001b[1;32m 654\u001b[0m \u001b[38;5;124;03m\"\"\"Instantiates a Model from its config (output of `get_config()`).\u001b[39;00m\n\u001b[1;32m 655\u001b[0m \n\u001b[1;32m 656\u001b[0m \u001b[38;5;124;03m Arguments:\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 666\u001b[0m \u001b[38;5;124;03m ValueError: In case of improperly formatted config dict.\u001b[39;00m\n\u001b[1;32m 667\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 668\u001b[0m input_tensors, output_tensors, created_layers \u001b[38;5;241m=\u001b[39m \u001b[43mreconstruct_from_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 669\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 670\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mcls\u001b[39m(inputs\u001b[38;5;241m=\u001b[39minput_tensors, outputs\u001b[38;5;241m=\u001b[39moutput_tensors,\n\u001b[1;32m 671\u001b[0m name\u001b[38;5;241m=\u001b[39mconfig\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mname\u001b[39m\u001b[38;5;124m'\u001b[39m))\n\u001b[1;32m 672\u001b[0m connect_ancillary_layers(model, created_layers)\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/functional.py:1275\u001b[0m, in \u001b[0;36mreconstruct_from_config\u001b[0;34m(config, custom_objects, created_layers)\u001b[0m\n\u001b[1;32m 1273\u001b[0m \u001b[38;5;66;03m# First, we create all layers and enqueue nodes to be processed\u001b[39;00m\n\u001b[1;32m 1274\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m layer_data \u001b[38;5;129;01min\u001b[39;00m config[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlayers\u001b[39m\u001b[38;5;124m'\u001b[39m]:\n\u001b[0;32m-> 1275\u001b[0m \u001b[43mprocess_layer\u001b[49m\u001b[43m(\u001b[49m\u001b[43mlayer_data\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1276\u001b[0m \u001b[38;5;66;03m# Then we process nodes in order of layer depth.\u001b[39;00m\n\u001b[1;32m 1277\u001b[0m \u001b[38;5;66;03m# Nodes that cannot yet be processed (if the inbound node\u001b[39;00m\n\u001b[1;32m 1278\u001b[0m \u001b[38;5;66;03m# does not yet exist) are re-enqueued, and the process\u001b[39;00m\n\u001b[1;32m 1279\u001b[0m \u001b[38;5;66;03m# is repeated until all nodes are processed.\u001b[39;00m\n\u001b[1;32m 1280\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m unprocessed_nodes:\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/functional.py:1257\u001b[0m, in \u001b[0;36mreconstruct_from_config.<locals>.process_layer\u001b[0;34m(layer_data)\u001b[0m\n\u001b[1;32m 1253\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1254\u001b[0m \u001b[38;5;66;03m# Instantiate layer.\u001b[39;00m\n\u001b[1;32m 1255\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtensorflow\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mpython\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mkeras\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mlayers\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m deserialize \u001b[38;5;28;01mas\u001b[39;00m deserialize_layer \u001b[38;5;66;03m# pylint: disable=g-import-not-at-top\u001b[39;00m\n\u001b[0;32m-> 1257\u001b[0m layer \u001b[38;5;241m=\u001b[39m \u001b[43mdeserialize_layer\u001b[49m\u001b[43m(\u001b[49m\u001b[43mlayer_data\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcustom_objects\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1258\u001b[0m created_layers[layer_name] \u001b[38;5;241m=\u001b[39m layer\n\u001b[1;32m 1260\u001b[0m node_count_by_layer[layer] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mint\u001b[39m(_should_skip_first_node(layer))\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/layers/serialization.py:173\u001b[0m, in \u001b[0;36mdeserialize\u001b[0;34m(config, custom_objects)\u001b[0m\n\u001b[1;32m 162\u001b[0m \u001b[38;5;124;03m\"\"\"Instantiates a layer from a config dictionary.\u001b[39;00m\n\u001b[1;32m 163\u001b[0m \n\u001b[1;32m 164\u001b[0m \u001b[38;5;124;03mArguments:\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 170\u001b[0m \u001b[38;5;124;03m Layer instance (may be Model, Sequential, Network, Layer...)\u001b[39;00m\n\u001b[1;32m 171\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 172\u001b[0m populate_deserializable_objects()\n\u001b[0;32m--> 173\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mgeneric_utils\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdeserialize_keras_object\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 174\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 175\u001b[0m \u001b[43m \u001b[49m\u001b[43mmodule_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mLOCAL\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mALL_OBJECTS\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 176\u001b[0m \u001b[43m \u001b[49m\u001b[43mcustom_objects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcustom_objects\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 177\u001b[0m \u001b[43m \u001b[49m\u001b[43mprintable_module_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mlayer\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/utils/generic_utils.py:360\u001b[0m, in \u001b[0;36mdeserialize_keras_object\u001b[0;34m(identifier, module_objects, custom_objects, printable_module_name)\u001b[0m\n\u001b[1;32m 354\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mfrom_config(\n\u001b[1;32m 355\u001b[0m cls_config,\n\u001b[1;32m 356\u001b[0m custom_objects\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mdict\u001b[39m(\n\u001b[1;32m 357\u001b[0m \u001b[38;5;28mlist\u001b[39m(_GLOBAL_CUSTOM_OBJECTS\u001b[38;5;241m.\u001b[39mitems()) \u001b[38;5;241m+\u001b[39m\n\u001b[1;32m 358\u001b[0m \u001b[38;5;28mlist\u001b[39m(custom_objects\u001b[38;5;241m.\u001b[39mitems())))\n\u001b[1;32m 359\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m CustomObjectScope(custom_objects):\n\u001b[0;32m--> 360\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_config\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcls_config\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 361\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 362\u001b[0m \u001b[38;5;66;03m# Then `cls` may be a function returning a class.\u001b[39;00m\n\u001b[1;32m 363\u001b[0m \u001b[38;5;66;03m# in this case by convention `config` holds\u001b[39;00m\n\u001b[1;32m 364\u001b[0m \u001b[38;5;66;03m# the kwargs of the function.\u001b[39;00m\n\u001b[1;32m 365\u001b[0m custom_objects \u001b[38;5;241m=\u001b[39m custom_objects \u001b[38;5;129;01mor\u001b[39;00m {}\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/base_layer.py:720\u001b[0m, in \u001b[0;36mLayer.from_config\u001b[0;34m(cls, config)\u001b[0m\n\u001b[1;32m 704\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 705\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mfrom_config\u001b[39m(\u001b[38;5;28mcls\u001b[39m, config):\n\u001b[1;32m 706\u001b[0m \u001b[38;5;124;03m\"\"\"Creates a layer from its config.\u001b[39;00m\n\u001b[1;32m 707\u001b[0m \n\u001b[1;32m 708\u001b[0m \u001b[38;5;124;03m This method is the reverse of `get_config`,\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 718\u001b[0m \u001b[38;5;124;03m A layer instance.\u001b[39;00m\n\u001b[1;32m 719\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 720\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/layers/pooling.py:862\u001b[0m, in \u001b[0;36mGlobalPooling2D.__init__\u001b[0;34m(self, data_format, **kwargs)\u001b[0m\n\u001b[1;32m 861\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, data_format\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m--> 862\u001b[0m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mGlobalPooling2D\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;21;43m__init__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 863\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdata_format \u001b[38;5;241m=\u001b[39m conv_utils\u001b[38;5;241m.\u001b[39mnormalize_data_format(data_format)\n\u001b[1;32m 864\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_spec \u001b[38;5;241m=\u001b[39m InputSpec(ndim\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m4\u001b[39m)\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/training/tracking/base.py:517\u001b[0m, in \u001b[0;36mno_automatic_dependency_tracking.<locals>._method_wrapper\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 515\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_self_setattr_tracking \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mFalse\u001b[39;00m \u001b[38;5;66;03m# pylint: disable=protected-access\u001b[39;00m\n\u001b[1;32m 516\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 517\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43mmethod\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 518\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[1;32m 519\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_self_setattr_tracking \u001b[38;5;241m=\u001b[39m previous_value \u001b[38;5;66;03m# pylint: disable=protected-access\u001b[39;00m\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/engine/base_layer.py:340\u001b[0m, in \u001b[0;36mLayer.__init__\u001b[0;34m(self, trainable, name, dtype, dynamic, **kwargs)\u001b[0m\n\u001b[1;32m 329\u001b[0m allowed_kwargs \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 330\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minput_dim\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 331\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minput_shape\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 337\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mimplementation\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 338\u001b[0m }\n\u001b[1;32m 339\u001b[0m \u001b[38;5;66;03m# Validate optional keyword arguments.\u001b[39;00m\n\u001b[0;32m--> 340\u001b[0m \u001b[43mgeneric_utils\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mvalidate_kwargs\u001b[49m\u001b[43m(\u001b[49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mallowed_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 342\u001b[0m \u001b[38;5;66;03m# Mutable properties\u001b[39;00m\n\u001b[1;32m 343\u001b[0m \u001b[38;5;66;03m# Indicates whether the layer's weights are updated during training\u001b[39;00m\n\u001b[1;32m 344\u001b[0m \u001b[38;5;66;03m# and whether the layer's updates are run during training.\u001b[39;00m\n\u001b[1;32m 345\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_trainable \u001b[38;5;241m=\u001b[39m trainable\n",
|
||||
"File \u001b[0;32m~/miniconda3/envs/tensorflow-cuda/lib/python3.9/site-packages/tensorflow/python/keras/utils/generic_utils.py:808\u001b[0m, in \u001b[0;36mvalidate_kwargs\u001b[0;34m(kwargs, allowed_kwargs, error_message)\u001b[0m\n\u001b[1;32m 806\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m kwarg \u001b[38;5;129;01min\u001b[39;00m kwargs:\n\u001b[1;32m 807\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m kwarg \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m allowed_kwargs:\n\u001b[0;32m--> 808\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m(error_message, kwarg)\n",
|
||||
"\u001b[0;31mTypeError\u001b[0m: ('Keyword argument not understood:', 'keepdims')"
|
||||
     ]
    }
   ],
   "source": [
    "new_model = tf.keras.models.load_model('Model_1.h5')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "664cf629",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
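The TypeError above is version skew: Model_1.h5 was saved by a Keras whose GlobalAveragePooling2D accepts a `keepdims` argument, while the tensorflow.python.keras in this environment rejects it. A minimal workaround sketch, assuming the model was saved with the default keepdims behavior and that editing the HDF5 config in place is acceptable (illustrative, not part of the repo):

import json
import h5py
import tensorflow as tf

# Strip the `keepdims` key the older Keras does not understand, then load as usual.
with h5py.File('Model_1.h5', 'r+') as f:
    model_config = json.loads(f.attrs['model_config'])
    for layer in model_config['config']['layers']:
        layer['config'].pop('keepdims', None)   # remove the kwarg validate_kwargs rejects
    f.attrs['model_config'] = json.dumps(model_config)

new_model = tf.keras.models.load_model('Model_1.h5')

Upgrading TensorFlow so the loading environment matches the one that saved the model avoids the surgery entirely.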
@ -11,16 +11,16 @@ training = curate.to_training(raw_data) # creates raw_df
class_training = curate.class_training(training) # creates initial class_training df
nvl_training = curate.nvl_training(training) # creates initial nvl_training
dropd = curate.drop_nvl_cols(nvl_training) # label mask
dropd

# pulls values out of lists for both dfs and creates temp_pics_source_list.txt
expanded_dfs = curate.expand_nvlclass(class_training, dropd)
expanded_dfs = curate.expand_nvlclass(class_training, dropd) # pulls values out of lists for both dfs

expanded_class = expanded_dfs[0] # TODO still having problems with Unnamed: 0 col
expanded_dropd = expanded_dfs[1] # TODO incorrect df. Look at nvl_training func. Specifically "reindex" usage

download = input('download images?: ')
if 'y' in download.lower(): # fixed: the original ('y' or 'Y') in download only ever tested for 'y'
    curate.dl_pictures()
    with open('temp_pics_source_list.txt') as f:
        test_list = json.load(f)
    curate.dl_pictures(test_list)
else:
    pass
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
404 ebay_api.py
@ -16,151 +16,12 @@ import pandas as pd
import config as cfg
import shutil
import re
import urllib, base64

from ebaysdk.exception import ConnectionError
from ebaysdk.trading import Connection as Trading
from ebaysdk.finding import Connection as Finding
from ebaysdk.shopping import Connection as Shopping

# renew oauth token for shopping api
def getAuthToken():
    AppSettings = {
        'client_id': cfg.oauth["client_id"],
        'client_secret':cfg.oauth["client_secret"],
        'ruName':cfg.oauth["RuName"]
    }

    authHeaderData = AppSettings['client_id'] + ':' + AppSettings['client_secret']
    encodedAuthHeader = base64.b64encode(str.encode(authHeaderData))
    encodedAuthHeader = str(encodedAuthHeader)[2:len(str(encodedAuthHeader))-1]

    headers = {
        "Content-Type" : "application/x-www-form-urlencoded", # the form encoding the token endpoint requires
        "Authorization" : "Basic " + str(encodedAuthHeader)
    }

    body = {
        "grant_type" : "client_credentials",
        "redirect_uri" : AppSettings['ruName'],
        "scope" : "https://api.ebay.com/oauth/api_scope"
    }

    data = urllib.parse.urlencode(body)

    tokenURL = "https://api.ebay.com/identity/v1/oauth2/token"

    response = requests.post(tokenURL, headers=headers, data=data).json()
    # error = response['error_description'] #if errors
    access_token = response['access_token']

    with open('temp_oauth_token.txt', 'w') as f:
        json.dump(access_token, f)

    return access_token

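For orientation, a minimal sketch of how the token produced by getAuthToken() is consumed by the Shopping API calls further down (the ItemID here is a made-up placeholder):

import requests

access_token = getAuthToken()  # also cached in temp_oauth_token.txt

headers = {
    "X-EBAY-API-IAF-TOKEN": access_token,
    "version": "671",
}
url = ("https://open.api.ebay.com/shopping?&callname=GetMultipleItems"
       "&responseencoding=JSON&ItemID=123456789012")  # placeholder ItemID
response = requests.get(url, headers=headers, timeout=24)
response.raise_for_status()
print(response.json())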
class FindingApi:
    '''
    Methods for accessing eBay's FindingApi services
    '''

    def __init__(self, service):
        self.service = [
            'findItemsAdvanced', 'findCompletedItems',
            'findItemsByKeywords', 'findItemsIneBayStores', 'findItemsByCategory',
            'findItemsByProduct'
        ][service] # Currently using only index 4, i.e., service = 4

    def get_data(self, category_id):

        '''
        Gets raw JSON data from FindingApi service call. Currently being used to
        get itemIDs from categories;
        '''
        # startTime = dateutil.parser.isoparse( startTime )
        # now = datetime.datetime.now(tz=pytz.UTC)
        # days_on_site = (now - startTime).days # as int

        ids = []
        params = {
            "OPERATION-NAME":self.service,
            "SECURITY-APPNAME":cfg.sec['SECURITY-APPNAME'],
            "SERVICE-VERSION":"1.13.0",
            "RESPONSE-DATA-FORMAT":"JSON",
            "categoryId":category_id,
            "paginationInput.entriesPerPage":"100",
            "paginationInput.PageNumber":"1",
            "itemFilter(0).name":"Condition",
            "itemFilter(0).value":"Used",
            "itemFilter.name":"HideDuplicateItems",
            "itemFilter.value":"true",
            "sortOrder":"StartTimeNewest",
        }
        # "itemFilter(1).name":"TopRatedSellerOnly", # TODO fix here
        # "itemFilter(1).value":"true"

        try:
            response = requests.get("https://svcs.ebay.com/services/search/FindingService/v1",
                                    params=params, timeout=24)
            response.raise_for_status()

        except requests.exceptions.RequestException: # this appears to work; TODO need a way to resume where you left off, or a better timeout
            print('connection error')
            return ids
        try:
            data = response.json()
            for item in data['findItemsByCategoryResponse'][0]['searchResult'][0]['item']:
                ids.append(item['itemId'][0])

            ids = list(set(ids))

        except (AttributeError, KeyError):
            print('AttributeError or KeyError. Exiting')
            print(response.json())
            return ids

        return ids

# TODO add some other options to the finding call, such as filtering for used items only; that might give a better
# dataset for training, or maybe a mixture of new and used. Try to work out the odds of maximizing the number of
# useful pictures in the training set while limiting useless ones: e.g., taking a random 3 of 8 pictures per listing
# might yield 3 good pictures while still growing the training set, or limiting to the first 5 pictures may beat
# random selection.

# You may even get more consistency with used shoes, since they are "one-off" items without confusing multiple
# variations and colors. You can also run small training sets on both new and used to see which is more accurate,
# or whether a combo of both is.

    def get_ids_from_cats(self):
        '''
        Creates a 20-itemId list to use for the ShoppingApi
        call
        '''

        ids = []

        # load category id list
        with open('cat_list.txt') as jf:
            cat_list = json.load(jf)

        # load list of master ids
        with open('master_ids.txt') as f:
            master_ids = json.load(f)

        # fetch ids with calls to Finding Api given cats as param
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for future in executor.map(self.get_data, cat_list):
                ids.extend(future)

        # append master ids list with temporary ids from single function call and save
        master_ids.extend(ids)
        master_ids = list(set(master_ids))
        with open('master_ids.txt', 'w') as f:
            json.dump(master_ids, f)

        # 20-ItemID list created to maximize dataset/decrease calls provided call constraints
        twenty_id_list = [','.join(ids[n:n+20]) for n in list(range(0,
            len(ids), 20))]

        return twenty_id_list, ids

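The comma-joined chunking above is the only packing step; with hypothetical ids it behaves like this:

# 45 fake ids -> three strings: two carrying 20 ids each, one carrying the last 5
ids = [str(n) for n in range(45)]
twenty_id_list = [','.join(ids[n:n+20]) for n in range(0, len(ids), 20)]
assert len(twenty_id_list) == 3
assert twenty_id_list[0].count(',') == 19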
class ShoppingApi:
    '''
@ -168,14 +29,6 @@ class ShoppingApi:
    pandas dataframes
    '''

    # def __init__(self):
    #
    #     # renew oauth token
    #     oauth_response = getAuthToken()
    #     access_token = oauth_response[0]
    #
    #     self.access_token = access_token

    def update_cats(self):
        '''
        Updates cat_list.txt
@ -184,19 +37,19 @@ class ShoppingApi:
        parent_cats = ['3034', '93427'] # Women's and Men's shoe departments
        cat_list = []

        with open('temp_oauth_token.txt') as f:
            access_token = json.load(f)
        for department in parent_cats:

            headers = {
                "X-EBAY-API-IAF-TOKEN":access_token,
            params = {
                "callname":"GetCategoryInfo",
                "X-EBAY-API-IAF-TOKEN":cfg.sec['X-EBAY-API-IAF-TOKEN'],
                "version":"671",
                "responseencoding":"JSON",
                "CategoryID":department,
                "IncludeSelector":"ChildCategories",
            }

            url = "https://open.api.ebay.com/shopping?&callname=GetCategoryInfo&responseencoding=JSON&IncludeSelector=ChildCategories&CategoryID="+department

            try:
                response = requests.get(url, headers=headers, timeout=4)
                response = requests.get("https://open.api.ebay.com/shopping?", params=params, timeout=4)
                response.raise_for_status()

            except requests.exceptions.RequestException:
@ -206,52 +59,49 @@ class ShoppingApi:
            response = response['CategoryArray']['Category'][1:] # excludes index 0 as this is parent node, i.e., women's or men's dept.

            temp_cat_list = [cat['CategoryID'] for cat in response]

            if department == '3034':
                women_cats = temp_cat_list
            elif department == '93427':
                men_cats = temp_cat_list

            cat_list.extend(temp_cat_list)

        with open('cat_list.txt', 'w') as f:
            json.dump(cat_list, f)
        with open('women_cat_list.txt', 'w') as f:
            json.dump(women_cats, f)
        with open('men_cat_list.txt', 'w') as f:
            json.dump(men_cats, f)
        with open('cat_list.txt', 'w') as f:
            json.dump(cat_list, f)

        # leaf_list = [node['LeafCategory'] for node in response]

    def get_item_from_findItemsByCategory(self, twenty_id):

        '''
        Gets raw JSON data from multiple live listings given multiple itemIds
        '''
        with open('temp_oauth_token.txt') as f:
            access_token = json.load(f)

        with open('item_id_results.txt') as f:
            item_id_results = json.load(f)

        headers = {
            "X-EBAY-API-IAF-TOKEN":access_token,
            "X-EBAY-API-IAF-TOKEN":cfg.sec['X-EBAY-API-IAF-TOKEN'], # TODO implement auto oauth token renewal
            "version":"671",
        }

        url = "https://open.api.ebay.com/shopping?&callname=GetMultipleItems&responseencoding=JSON&IncludeSelector=ItemSpecifics&ItemID="+twenty_id

        try:
            # random sleep here between 0 and 10 secs?

            sleep(randint(1,10)) # may not be necessary
            response = requests.get(url, headers=headers, timeout=24)
            response.raise_for_status()
            response = response.json()
            item = response['Item']
            response = response['Item']
            print('index number {}'.format(item_id_results.index(twenty_id)))
            print(response)

        except (requests.exceptions.RequestException, KeyError):
            print('connection error. IP limit possibly exceeded')
            print(response)
            return # this returns NoneType. Handled at conky()
            print('index number {}'.format(item_id_results.index(twenty_id)))
            return # returns NoneType. Handle at conky()

        return item
        return response

    def conky(self, twenty_ids_list):
    def conky(self):
        '''
        Runs get_item_from_findItemsByCategory in multiple threads to get relevant
        data for creating training sets
@ -262,17 +112,23 @@ class ShoppingApi:
        except (FileNotFoundError, ValueError):
            data = []

        try:
            with open('item_id_results.txt') as f:
                item_id_results = json.load(f)

        except (FileNotFoundError, ValueError):
            item_id_results = scrape_ids.main()

        with concurrent.futures.ThreadPoolExecutor() as executor:
            for future in executor.map(self.get_item_from_findItemsByCategory, twenty_ids_list):
            for future in executor.map(self.get_item_from_findItemsByCategory, item_id_results):
                if future is not None:
                    for item in future:
                        data.append(item) # The end result should be a list of dicts where each dict in the list is a listing
                else:
                    print('response is None')
                    print('reached call limit')
                    break
        with open('raw_data.txt', 'w') as f:
            json.dump(data, f)
        return data

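conky() leans on executor.map yielding results in submission order, so a single None (a failed or rate-limited call) can stop consumption early. A self-contained sketch of that pattern with a stand-in worker:

import concurrent.futures

def fake_call(n):
    # returns None on the fourth submission to mimic hitting a call limit
    return None if n == 3 else [n]

data = []
with concurrent.futures.ThreadPoolExecutor() as executor:
    for future in executor.map(fake_call, range(6)):
        if future is None:
            print('reached call limit')
            break
        data.extend(future)

# data == [0, 1, 2]; later submissions may still execute, but their results are discarded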
# NOTE:

@ -406,106 +262,74 @@ class CurateData:
        '''
        expand = input("expand image list or use primary listing image? (y or n): ")
        if 'y' in expand.lower(): # fixed: the original ('y' or 'Y') in expand only ever tested for 'y'
            count = input('how many images? All [A] or the first <n> images?')
            if 'A' in count:
                expanded_class = class_training.explode('PictureURL').reset_index(drop=True)
                expanded_class = expanded_class.dropna(subset=['PictureURL'])
                expanded_class = expanded_class.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)

                expanded_dropd = dropd.explode('PictureURL').reset_index(drop=True)
                expanded_dropd = expanded_dropd.dropna(subset=['PictureURL'])
                expanded_dropd = expanded_dropd.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)

                expanded_dropd = self.extract_df(expanded_dropd) # convert lists to values

                temp_pics_source_list = list(set(expanded_class.PictureURL.to_list()))
            else:
                count = int(count)
                class_training['PictureURL'] = class_training['PictureURL'].apply(lambda x: x[0:count] if len(x)>0 else np.nan)
                expanded_class = class_training.explode('PictureURL').reset_index(drop=True)
                expanded_class = expanded_class.dropna(subset=['PictureURL'])
                expanded_class = expanded_class.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)

                dropd = dropd.dropna(subset=['PictureURL'])
                dropd['PictureURL'] = dropd['PictureURL'].apply(lambda x: x[0:count] if len(x)>0 else np.nan)
                expanded_dropd = dropd.explode('PictureURL').reset_index(drop=True)
                expanded_dropd = expanded_dropd.dropna(subset=['PictureURL'])

                expanded_dropd = self.extract_df(expanded_dropd) # convert lists to values

                # retrieves picture URLs from master raw_data.txt and rewrites temp_pics_source_list.txt
                temp_pics_source_list = list(set(expanded_class.PictureURL.to_list())) # TODO

        else:
            class_training['PictureURL'] = class_training['PictureURL'].apply(lambda x: x[0] if len(x)>0 else np.nan)
            expanded_class = class_training.dropna()
            dropd = dropd.dropna(subset=['PictureURL'])
            dropd['PictureURL'] = dropd['PictureURL'].apply(lambda x: x[0] if len(x)>0 else np.nan)
            dropd = dropd.dropna(subset=['PictureURL'])
            expanded_dropd = dropd
            expanded_dropd = dropd.explode('PictureURL').reset_index(drop=True)
            expanded_dropd = expanded_dropd.dropna(subset=['PictureURL'])
            expanded_dropd = expanded_dropd.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)

            expanded_dropd = self.extract_df(expanded_dropd) # convert lists to values

            # retrieves picture URLs from master raw_data.txt and rewrites temp_pics_source_list.txt
            temp_pics_source_list = list(set(expanded_class.PictureURL.to_list())) # TODO because this var is deleted after dl_pictures you may be
            # getting duplicate pictures; i.e., expanded_class.PictureURL is a master series and will write temp_pics_source_list as such,
            # giving many repeated PictureURLs (they will not get downloaded due to the check in dl_pic, but the checking will keep growing
            # in compute requirements). So, figure out a way to make a true temp list based on the current call executed

        else:
            class_training['PictureURL'] = class_training['PictureURL'].apply(lambda x: x[0])
            expanded_class = class_training
            dropd['PictureURL'] = dropd['PictureURL'].apply(lambda x: x[0])
            expanded_dropd = dropd

            expanded_dropd = self.extract_df(expanded_dropd) # convert lists to values
            temp_pics_source_list = list(set(expanded_class.PictureURL.to_list()))

        try:
            with open('temp_pics_source_list.txt') as f:
                tpsl = json.load(f)
                tpsl.extend(temp_pics_source_list)

                # ensures no duplicate source URLs exist
                temp_pics_source_list = list(set(tpsl))
                with open('temp_pics_source_list.txt', 'w') as f:
                    json.dump(temp_pics_source_list, f)

        # creates the file if the script is run for the first time and the file is not present
        except (ValueError, FileNotFoundError):
            with open('temp_pics_source_list.txt', 'w') as f:
                json.dump(temp_pics_source_list, f)

        # Append to master training dataframes, drop potential dupes and save

        expanded_class.to_csv('expanded_class.csv')
        # expanded_class = pd.read_csv('expanded_class.csv', index_col=0)
        # expanded_class.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)
        # expanded_class.to_csv('expanded_class.csv', mode='a', encoding='utf-8') # TODO see line 235 about views and copies

        expanded_dropd.to_csv('expanded_dropd.csv')
        # expanded_dropd = pd.read_csv('expanded_dropd.csv', index_col=0)
        # expanded_dropd.drop_duplicates(subset=['PictureURL']).reset_index(drop=True)
        # expanded_dropd.to_csv('expanded_dropd.csv', mode='a', encoding='utf-8')

        return expanded_class, expanded_dropd

    def dl_pic(self, dict_pics, pic):
    def dl_pictures(self, *args):
        '''
        Downloads pictures from api to local storage using temp_pics_source_list
        and creates custom {source:target} dictionary as dict_pics
        '''

        try:

            # check if image exists in current working directory. avoids dupes
            if os.path.exists(dict_pics[pic]):
                pass

            else:

                try:

                    r = requests.get(pic, stream=True)
                    r.raw.decode_content = True
                    with open(dict_pics[pic], 'wb') as f:
                        shutil.copyfileobj(r.raw, f)

                except ConnectionError:
                    return

        except KeyError:
            pass

    def dict_pics(self):
        # TODO add option to include only the first image of each listing, as
        # the others may be poor for training. Also consider adding an option to
        # reduce the size of each pic downloaded

        try:
            with open('target_dirs.txt', 'r+') as f: # TODO you can add an option to change directory here, too. Look up how to have optional arguments
                target_dir = json.load(f)

        except (ValueError, FileNotFoundError):
            target_dir = input('No target directory found. Create One? [y] or [n]:')
            if target_dir.lower() == 'y': # fixed: the original == ('y' or 'Y') only ever compared against 'y'
                target_dir = input('Please provide full URL to destination folder:') # TODO need to catch human syntax errors here
                with open('target_dirs.txt','w') as f:
                    json.dump(target_dir, f)

            else:
                os.mkdir(os.getcwd()+os.sep+'training_images')
                target_dir = os.getcwd()+os.sep+'training_images'
@ -513,59 +337,58 @@ class CurateData:
                    json.dump(target_dir, f)
                print('Creating default folder in current directory @ ' + target_dir)

        # open url list in working directory
        with open('temp_pics_source_list.txt') as f:

            try:
                temp_pics_source_list = json.load(f)

            except (ValueError, FileNotFoundError):
                print('url list not found. aborting')
                return

        dict_pics = {}

        # make custom dict, {source:target}, and name images from unique URL patt
        for k in temp_pics_source_list:
            try:
                patt_1 = re.search(r'[^/]+(?=/\$_|.(\.jpg|\.jpeg|\.png))', k, re.IGNORECASE)
                patt_2 = re.search(r'(\.jpg|\.jpeg|\.png)', k, re.IGNORECASE)
                if patt_1 is not None and patt_2 is not None:
                    tag = patt_1.group() + patt_2.group().lower()
                    file_name = target_dir + os.sep + tag
                    dict_pics.update({k:file_name})
            except TypeError:
                pass

        with open('dict_pics.txt', 'w') as f:
            json.dump(dict_pics, f)

        return dict_pics # TODO still need to find a solution for outliers (i.e., a naming scheme for unusual source URLs)

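To make the naming scheme concrete, here is what the two patterns produce for a made-up URL in eBay's `$_` style (the URL and folder are placeholders):

import os, re

k = 'https://i.ebayimg.com/00/s/MTYwMFgxMjAw/z/AbCdEfGhIjK/$_57.JPG'
patt_1 = re.search(r'[^/]+(?=/\$_|.(\.jpg|\.jpeg|\.png))', k, re.IGNORECASE)
patt_2 = re.search(r'(\.jpg|\.jpeg|\.png)', k, re.IGNORECASE)
tag = patt_1.group() + patt_2.group().lower()   # 'AbCdEfGhIjK.jpg'
file_name = 'training_images' + os.sep + tag    # the unique URL segment becomes the file name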
    def dl_pictures(self, *dict_pics):
        '''
        Downloads pictures from api to local storage using temp_pics_source_list
        and dict_pics
        '''

        if not dict_pics:
            dict_pics = self.dict_pics()

        with open('temp_pics_source_list.txt') as f:
            try:
                temp_pics_source_list = json.load(f)
                if args:
                    temp_pics_source_list = args[0]
                else:
                    temp_pics_source_list = json.load(f)
            except (ValueError, FileNotFoundError):
                print('url list not found. download aborted')
                return
                if args:
                    temp_pics_sources_list = args[0]
                else:
                    print('url list not found. download aborted')
                    return

        temp_dict_pics = {k:target_dir + os.sep + re.search(r'[^/]+(?=/\$_|.jpg)', k, re.IGNORECASE).group() + '.jpg' for k in temp_pics_source_list}

        try:
            with open('dict_pics.txt') as f:
                dict_pics = json.load(f)
                dict_pics.update(temp_dict_pics) # TODO This still creates duplicates
            with open('dict_pics.txt', 'w') as f:
                json.dump(dict_pics, f)

        except (ValueError, FileNotFoundError):
            with open('dict_pics.txt', 'w') as f:
                json.dump(temp_dict_pics, f)
                dict_pics = temp_dict_pics

        def dl_pic(dict_pics, pic):

            if os.path.exists(dict_pics[pic]): # or calling temp_dict_pics[pic] can work
                pass # TODO this doesn't seem to catch duplicates... or possibly there are none? On inspection the files aren't duplicates,
                # but then that would mean temp_pics_source_list is changing for some reason?

            else:
                try:
                    r = requests.get(pic, stream=True)
                    r.raw.decode_content = True
                    with open(temp_dict_pics[pic], 'wb') as f: # or calling dict_pics[pic] can work
                        shutil.copyfileobj(r.raw, f)
                except ConnectionError:
                    return

        bargs = [(dict_pics, pic) for pic in temp_pics_source_list]
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for future in executor.map(lambda p: self.dl_pic(*p), bargs):
            for future in executor.map(lambda p: dl_pic(*p), bargs):
                if future is not None:
                    future
                else:
                    print('connection error')

        os.remove('temp_pics_source_list.txt') # Deletes the file after downloads complete successfully

class PreProcessing:
    '''
    Includes methods for pre-processing training set input and labels in the
@ -577,18 +400,13 @@ class PreProcessing:
    splits, etc.
    '''

    def dict_pics(self):
    def stt_training(self, dict_pics, expanded_class, expanded_dropd):
        '''
        Source to target training. Replaces source image URL with target URL
        determined by values in dict_pics variable.
        '''

        target_dir = os.getcwd()
        with open('temp_pics_source_list.txt') as f:
            temp_pics_source_list = json.load(f)
        dict_pics = {k:target_dir + os.sep + re.search(r'[^/]+(?=/\$_|.jpg)', k, re.IGNORECASE).group() + '.jpg' for k in temp_pics_source_list}
        print("{source:target} dictionary created @ " + os.getcwd() + os.sep + 'training_images')
        return dict_pics
        pass

# TODO pipeline gameplan: 5 files: dict_pics.txt, raw_json.txt, raw_json.csv, expanded_class.csv, expanded_dropd.csv
# cont... open raw_json.txt and append, same with the csv --> process new data --> pull out image source+dest and expand new dfs for the additional pictures

@ -1,28 +0,0 @@
import os
import PIL
from pathlib import Path
from PIL import UnidentifiedImageError, Image

'''
Since PIL is used in keras to open images, you need to identify and remove
faulty images to avoid hiccups in training. When these are removed from their
parent folders, their corresponding row in the dataframe should also be removed.
But because the dataframe is constructed as such:

'''
def faulty_images():
    path = Path("training_images").rglob("*.jpg")
    for img_p in path:
        try:
            img = PIL.Image.open(img_p)
        except PIL.UnidentifiedImageError:
            os.remove(img_p)
            # print(img_p + "Removed")
# remove from the folder, the dataset (constructed from the csv files), dict_pics,
# temp_pics_source_list, expanded_dropd, expanded_class. But remember that if you
# run curate.py again, the same faulty images will be recreated, since they are
# still in the raw_data.txt file

if __name__=="__main__":
    faulty_images()
103 rand_revise.py
@ -1,103 +0,0 @@
import ebaysdk
import json
import requests
import random
from ebaysdk.trading import Connection as Trading
from ebaysdk.finding import Connection as Finding
from ebaysdk.shopping import Connection as Shopping
import concurrent.futures
import config as cfg
import store_ids
import ebay_api

tapi = Trading(config_file='ebay.yaml')


def revised_price(id, original_prices):
    percent = (random.randint(95, 105))/100
    rev_price = original_prices[id]*percent
    rev_price = str(round(rev_price, 2))
    return rev_price

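The revision draws a uniform whole-number percentage between 95 and 105; with made-up numbers:

# e.g. randint(95, 105) returns 103 -> a 100.00 listing is revised to '103.0'
original_prices = {'123456789012': 100.00}  # hypothetical ItemID
percent = 103 / 100
rev_price = str(round(original_prices['123456789012'] * percent, 2))  # '103.0'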
def revise_item(id, rev_price):
    response = tapi.execute(
        'ReviseItem', {
            'item': {
                'ItemID': id,
                'StartPrice':rev_price
            }

        }
    )

def revise_items():
    with open('original_prices.txt') as f:
        original_prices = json.load(f)

    for id in original_prices:
        rev_price = revised_price(id, original_prices)
        revise_item(id, rev_price)

def get_prices(twenty_id):

    '''
    Gets raw JSON data from multiple live listings given multiple itemIds
    '''

    with open('temp_oauth_token.txt') as f:
        access_token = json.load(f)

    headers = {
        "X-EBAY-API-IAF-TOKEN":access_token,
        "version":"671",
    }

    url = "https://open.api.ebay.com/shopping?&callname=GetMultipleItems&responseencoding=JSON&ItemID="+twenty_id

    try:

        response = requests.get(url, headers=headers, timeout=24)
        response.raise_for_status()
        response = response.json()
        item = response['Item']


    except (requests.exceptions.RequestException, KeyError):
        print('connection error. IP limit possibly exceeded')
        print(response)
        return # this returns NoneType. Handled at get_prices_thread

    id_price_dict = {i['ItemID']:i['ConvertedCurrentPrice']['Value'] for i in item} # loop var renamed; the original `for item in item` shadowed the list
    return id_price_dict

def get_prices_thread(twenty_ids_list):
    '''
    Runs get_prices in multiple threads
    '''

    id_price_dict = {}

    with concurrent.futures.ThreadPoolExecutor() as executor:
        for future in executor.map(get_prices, twenty_ids_list):
            if future is not None:
                id_price_dict.update(future)
            else:
                print('response is None')
                break
    return id_price_dict

def main():

    store_ids.main() # refreshes ebay_ids.txt with your store ids for all listings; its main() writes the file and returns None, so its result must not be assigned to ids as the original did

    with open('ebay_ids.txt') as f:
        ids = json.load(f)

    twenty_id_list = [','.join(ids[n:n+20]) for n in list(range(0,
        len(ids), 20))]

    ebay_api.getAuthToken() # updates your Oauth Token
    id_price_dict = get_prices_thread(twenty_id_list)
    return id_price_dict

if __name__=="__main__":
    main()
@ -36,15 +36,15 @@ def get_isurl(category_id): # "get itemSearchURL"
        return url
    try:
        data = response.json()
        print(data)
        # NOTE approx 220 pages of listings per cat @ 35 items per page
        item_cond = "&rt=nc&LH_ItemCondition=3000&mag=1" # preowned
        item_cond_new = '&LH_ItemCondition=3'
        urls = []
        base_url = data['findItemsByCategoryResponse'][0]['itemSearchURL'][0]
        for pg in list(range(1,34)): # No results after around page 32
            url = base_url+"&_pgn="+str(pg)+item_cond
            print(url)
        url = data['findItemsByCategoryResponse'][0]['itemSearchURL'][0]
        url = url+item_cond
        j = list(range(1,221))
        for i in j:
            pg = "&_pgn={}".format(str(i))
            url = url.replace('&_pgn=1', pg)
            urls.append(url)

    except (AttributeError, KeyError):
@ -70,22 +70,17 @@ def get_ids(url):
    '''
    html = requests.get(url).text
    soup = b(html, "html.parser")
    print(soup)
    ids = list(soup.find_all(href=re.compile(r"[\d]+(?=\?hash)")))
    ids = [id['href'] for id in ids]
    ids = [re.findall(r"[\d]+(?=\?)", id)[0] for id in ids]
    print(ids)
    ids = list(set(ids)) # necessary; two links are returned with pattern match

    return ids

def threaded_get_ids(urls):
    '''
    Runs get_ids() w/in ThreadPoolExecutor() for multi threaded requests.
    Constructs and saves unique ids and 20_itemIDs for use with ebay_api
    methods
    '''

    try:
        with open('ids.txt') as f:
        with open('item_id_results.txt') as f:
            ids = json.load(f)
    except FileNotFoundError:
        ids = []
@ -94,32 +89,14 @@ def threaded_get_ids(urls):
        for future in executor.map(get_ids, urls):
            ids.extend(future)

    ids = list(set(ids)) # necessary; two links are returned with pattern match
    item_id_results = [','.join(ids[n:n+20]) for n in list(range(0,
        len(ids), 20))] # 20-ItemID list created to maximize dataset/decrease calls given call constraints

    with open('ids.txt', 'w') as f:
        json.dump(ids,f)

    with open('item_id_results.txt', 'w') as f:
        json.dump(item_id_results, f)

    return item_id_results

def id_count():
    '''
    Counts Unique IDs of item_id_results for testing
    '''
    with open('item_id_results.txt') as f:
        item_id_results = json.load(f)

    ids = ','.join(item_id_results)
    ids = ids.split(',')
    uniq = len(list(set(ids)))
    print('{} Unique IDs'.format(uniq))

    return ids

def main():
    urls = threaded_urls()
    item_id_results = threaded_get_ids(urls)
7 shopping.py Normal file
@ -0,0 +1,7 @@
'''
Initial download and write of raw data from ebay
'''
import ebay_api

shopping = ebay_api.ShoppingApi()
data = shopping.conky()
70 store_ids.py
@ -1,70 +0,0 @@
import os
import requests
import json
import ebaysdk
from ebaysdk.trading import Connection as Trading
from ebaysdk.finding import Connection as Finding
import time
import concurrent.futures
# (categoryId = women's shoes = 3034)
# Initialize loop to get number of pages needed in for loop
start = time.time()
fapi = Finding(config_file = "ebay.yaml")
tapi = Trading(config_file = 'ebay.yaml')

fresponse = fapi.execute(
    'findItemsAdvanced',
    {
        'itemFilter':{
            'name':'Seller',
            'value':'chesshoebuddy'
        },
        'paginationInput':{
            'entriesPerPage':'100',
            'pageNumber':'1'
        }
    }
).dict()

page_results = int(fresponse['paginationOutput']['totalPages'])

pages = list(range(1, page_results + 1)) # same list the original append loop built, in idiomatic form

''' Begin definitions for getting ItemIds and SKU: '''

def id_up(n):
    ids = []
    fresponse = fapi.execute(
        'findItemsAdvanced',
        {
            'itemFilter':{
                'name':'Seller',
                'value':'chesshoebuddy'
            },
            'paginationInput':{
                'entriesPerPage':'100',
                'pageNumber':str(n)
            }
        }
    ).dict()
    for item in (fresponse['searchResult']['item']):
        itemID = item['itemId']
        #response = tapi.execute('GetItem',{'ItemID':itemID}).dict()
        ids.append(itemID)
    return ids

def main():
    ids = []
    skus = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        for future in executor.map(id_up, pages):
            ids.extend(future)

    with open('ebay_ids.txt', 'w') as outfile:
        json.dump(ids, outfile)

if __name__ == '__main__':
    main()
124 testing.ipynb
@ -1,124 +0,0 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "7eea0d4d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Num GPUs Available: 2\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "33d18ebd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2 Physical GPU, 3 Logical GPUs\n"
     ]
    }
   ],
   "source": [
    "gpus = tf.config.list_physical_devices('GPU')\n",
    "if gpus:\n",
    "  # Create 2 virtual GPUs with 1GB memory each\n",
    "  try:\n",
    "    tf.config.set_logical_device_configuration(\n",
    "        gpus[0],\n",
    "        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),\n",
    "         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])\n",
    "    logical_gpus = tf.config.list_logical_devices('GPU')\n",
    "    print(len(gpus), \"Physical GPU,\", len(logical_gpus), \"Logical GPUs\")\n",
    "  except RuntimeError as e:\n",
    "    # Virtual devices must be set before GPUs have been initialized\n",
    "    print(e)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "2b9ca96e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[22. 28.]\n",
      " [49. 64.]], shape=(2, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
    "a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "\n",
    "# Run on the GPU\n",
    "c = tf.matmul(a, b)\n",
    "print(c)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras.models import load_model\n",
    "\n",
    "# returns a compiled model\n",
    "# identical to the previous one\n",
    "model = load_model('Model_1.h5')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.predict_generator()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@ -1,209 +0,0 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "a43c3ccb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torchvision.models as models\n",
    "import pandas as pd\n",
    "import numpy as np  # assumed import; the Shoes dataset below uses np.array\n",
    "from skimage import io  # assumed import; the Shoes dataset below uses io.imread\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "from torchvision import transforms, utils\n",
    "from matplotlib import pyplot as plt\n",
    "from matplotlib.image import imread\n",
    "from collections import Counter\n",
    "import json\n",
    "import os\n",
    "import re\n",
    "import tempfile\n",
    "from os.path import exists\n",
    "from PIL import ImageFile\n",
    "import sklearn as sk\n",
    "from sklearn.model_selection import train_test_split, StratifiedShuffleSplit\n",
    "import image_faults"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "6c7577a6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.cuda.device_count()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "c7e9b947",
   "metadata": {},
   "outputs": [],
   "source": [
    "resnet18 = models.resnet18(pretrained=True)\n",
    "vgg16 = models.vgg16(pretrained=True)\n",
    "inception = models.inception_v3(pretrained=True)\n",
    "resnext50_32x4d = models.resnext50_32x4d(pretrained=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eabc61b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Shoes(Dataset):\n",
    "    def __init__(self, csvfile, root_dir, transform=None):\n",
    "        self.shoes_df = pd.read_csv(csvfile)\n",
    "        self.root_dir = root_dir\n",
    "        self.transform = transform\n",
    "\n",
    "    # the original cell defined __getitem__ twice; the first, empty stub was shadowed and is dropped here\n",
    "    def __getitem__(self, idx):\n",
    "        if torch.is_tensor(idx):\n",
    "            idx = idx.tolist()\n",
    "\n",
    "        img_name = os.path.join(self.root_dir,\n",
    "                                self.shoes_df.iloc[idx, 0])  # was self.data, which is never defined\n",
    "        image = io.imread(img_name)\n",
    "        data = self.shoes_df.iloc[idx, 1:]\n",
    "        data = np.array([data])\n",
    "        data = data.astype('float').reshape(-1, 2)\n",
    "        sample = {'image': image, 'landmarks': data}\n",
    "\n",
    "        if self.transform:\n",
    "            sample = self.transform(sample)\n",
    "\n",
    "        return sample\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.shoes_df)\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a0fc66b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "something = pd.read_csv('expanded_class.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "ed2aceeb",
   "metadata": {},
   "outputs": [],
   "source": [
    "def dict_pics_jup():\n",
    "    '''\n",
    "    {source:target} dict used to replace source urls with image location as input\n",
    "    '''\n",
    "    target_dir = os.getcwd() + os.sep + \"training_images\"\n",
    "    with open('temp_pics_source_list.txt') as f:\n",
    "        temp_pics_source_list = json.load(f)\n",
    "\n",
    "    dict_pics = {}\n",
    "    for k in temp_pics_source_list:\n",
    "        try:\n",
    "            patt_1 = re.search(r'[^/]+(?=/\\$_|.(\\.jpg|\\.jpeg|\\.png))', k, re.IGNORECASE)\n",
    "            patt_2 = re.search(r'(\\.jpg|\\.jpeg|\\.png)', k, re.IGNORECASE)\n",
    "            if patt_1 is not None and patt_2 is not None:\n",
    "                tag = patt_1.group() + patt_2.group().lower()\n",
    "                file_name = target_dir + os.sep + tag\n",
    "                dict_pics.update({k:file_name})\n",
    "        except TypeError:\n",
    "            print(k)\n",
    "    print(\"{source:target} dictionary created @ \" + target_dir)\n",
    "    return dict_pics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "0095fa33",
   "metadata": {},
   "outputs": [],
   "source": [
    "def cleanup(df, dict_pics):  # parameters added; the original took no arguments and relied on globals df and dict_pics\n",
    "    with open('women_cat_list.txt') as f:\n",
    "        women_cats = json.load(f)\n",
    "    with open('men_cat_list.txt') as f:\n",
    "        men_cats = json.load(f)\n",
    "\n",
    "    with open('temp_pics_source_list.txt') as f:\n",
    "        tempics = json.load(f)\n",
    "    # list of image urls that did not get named properly, to be removed from the dataframe\n",
    "    drop_row_vals = []\n",
    "    for pic in tempics:\n",
    "        try:\n",
    "            dict_pics[pic]\n",
    "        except KeyError:\n",
    "            drop_row_vals.append(pic)\n",
    "\n",
    "    df['PrimaryCategoryID'] = df['PrimaryCategoryID'].astype(str)  # pandas thinks ids are ints\n",
    "    df = df[df.PictureURL.isin(drop_row_vals)==False]  # remove improperly named image files\n",
    "    df = df[df.PrimaryCategoryID.isin(men_cats)==False]  # removes rows of men's categories (the original comment said women's)\n",
    "\n",
    "    blah = pd.Series(df.PictureURL)\n",
    "    df = df.drop(labels=['PictureURL'], axis=1)\n",
    "\n",
    "    blah = blah.apply(lambda x: dict_pics[x])\n",
    "    df = pd.concat([blah, df], axis=1)\n",
    "    df = df.groupby('PrimaryCategoryID').filter(lambda x: len(x)>25)  # removes cat outliers\n",
    "    return df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "edd196dc",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
37 try.py
@ -1,37 +0,0 @@
import ebaysdk
import json
import requests
import concurrent.futures
import config as cfg
from ebaysdk.shopping import Connection as Shopping
from ebaysdk.trading import Connection as Trading
sapi = Shopping(config_file = 'ebay.yaml')
tapi = Trading(config_file='ebay.yaml')

def get_cat_specs(cat):

    response = tapi.execute('GetCategorySpecifics',
        {'CategoryID':cat})
    cat_spacs = [name['Name'] for name in response.dict()['Recommendations']['NameRecommendation']]

    return cat_spacs

with open('cat_list.txt') as f:
    cat_list = json.load(f)

def threadd_cat_spacs():

    cat_spacs = []

    with concurrent.futures.ThreadPoolExecutor() as executor:
        for future in executor.map(get_cat_specs, cat_list):
            cat_spacs.extend(future)

    cat_spacs = list(set(cat_spacs))

    return cat_spacs

if __name__=='__main__':
    cat_spacs = threadd_cat_spacs()
    with open('cat_spacs.txt', 'w') as f:
        json.dump(cat_spacs, f)
@ -1,13 +0,0 @@
'''
Update dataset; instantiates FindingApi and makes a call to eBay's Finding Api
using the findItemsByCategory service. Updates the master_ids list and raw_data.
'''
import ebay_api

# Make call to ebay Finding service and return list of twenty_id strings
finding = ebay_api.FindingApi(4) # 4 is the service-list index selecting findItemsByCategory
twenty_id_list = finding.get_ids_from_cats()[0]

# renew oauth token and make call to shopping service to get item data and write to local file
shopping = ebay_api.ShoppingApi()
data = shopping.conky(twenty_id_list)